Introduction to Oceanography
Chapter 2 - Evolution of Life Through Time
This chapter is a brief summary of the evolution of life on Earth through time.
Historical geology is the science that examines concepts of evolution and geologic time as preserved in the fossil record. Historical geology is relevant to all other sciences that involve studies of the physical environment. This chapter briefly summarizes the history of life and discusses some major geologic events shaping planet Earth. Figure 2-1 highlights many of the key geological and biological events that occurred, impacting life, leading to the present.
Earth formed from the accumulation of dust, gases, asteroids, and small planetesimals in the stellar nebula (as discussed in Chapter 1). During this early period in Earth history, conditions on the surface of the planet were probably too hot for oceans to exist. Over time, however, the surface cooled enough for oceans to form and persist, though the oceans and atmosphere were chemically very different than what exists today. The early Earth had no significant free oxygen in the air or oceans, and the oceans were rich in organic compounds essential for the development and evolution of life. The oldest sedimentary rocks on Earth preserve evidence of biological activity, but only on a primitive microbial level. Early evolution was taking place on the molecular, intercellular, and microbial scales for the first 3 billion years of Earth's history. Eventually primitive life forms began to use photosynthesis as a source of energy, and gradually (over a billion years) the atmosphere and oceans became an oxygen-rich environment allowing more complex life forms to evolve.
Fig. 2-1. Geologic Time Scale with highlights in evolution and events in Earth history.
What Do All Living Organisms Have In Common?
All living things are made of cells, the minimal functional structure of all living organisms. Cells perform all the functions for survival in their environment.
All living things use energy (have a metabolism).
All living things display growth.
All living things are capable of reproduction.
All living things adapt to environmental conditions. They either respond to environmental changes or die off.
All living things display movement, ranging from fast to very slow (including internal movement of cytoplasm).
Origin of All Living Things On Earth
The age of the Earth within the Solar System is about 4.5 billion years. The oldest rocks that preserve fossil evidence of life have been discovered in Australia and Africa and date to around 3.5 billion years, with traces of possible organic residues dating back as much as 4.2 billion years. Biologists use the term abiogenesis to describe the natural processes by which life arose from non-living matter composed of simple organic compounds. Life functions through the interactions of carbon compounds including lipids, carbohydrates, and amino acids. These compounds are essential for cell membranes, movement of internal fluids for respiration, manufacture of proteins, and self-replicating molecules (the nucleic acids DNA and RNA). The big question yet to be resolved is whether all living things on Earth evolved from a common primordial ancestor (or ancestors) here on our planet, or whether life came to Earth from an extraterrestrial source (suggesting that evolution of living things may be widespread throughout the galaxy and beyond).
Key Developments In Understanding the Origin Of Life On Earth
Carl Linnaeus was a Swedish botanist, physician, and zoologist (lived 1707-1778) who laid the foundations for the modern scheme of binomial nomenclature. Linnaeus is considered a founder of modern taxonomy and ecology (Figure 2-2). For instance, humans are called Homo sapiens in binomial nomenclature.
Linnaeus's system of classification grouped organisms based on shared characteristics. Modern taxonomy attempts to connect taxonomy to the evolutionary framework of shared common ancestors (commonly referred to as the evolutionary tree of life; see below). In the past three centuries, millions of species have been identified and classified, but the lineages of different species are constantly being revised as new information becomes available.
Charles Darwin (1809-1882), a scientist/explorer, is credited with presenting the first published work dedicated to natural selection in his book entitled On the Origin of Species (published in 1859) (Figure 2-3). Darwin's theory of natural selection is now considered among the main processes that bring about biological evolution. Darwin's book is a compilation of his observations and thoughts about plants, animals, and fossils initially gathered during a five-year voyage around the world studying nature onboard the Royal Navy ship HMS Beagle. Natural selection is the process whereby organisms that are better adapted to their environment tend to survive and produce more offspring. Note: Darwin did not release his research for nearly two decades after the expedition, largely out of fear of repression, but his work arguably became one of the world's greatest scientific works of modern times.
Gregor Johann Mendel (1822-1884) was an Austrian geneticist/researcher (and monk) who conducted experimental research on creating hybrids of garden peas. In 1865 and 1866, he published his research on how hereditary characteristics are passed from parent organisms to their offspring. Mendelian theory is fundamental to much of what is known about modern genetics (Figure 2-4).
Over the past two centuries, many scientific discoveries and technological innovations have advanced our knowledge of biochemistry, cell structure and processes, and genetic evolution. In 1953, James Watson and Francis Crick discovered and reported the double-helical structure of the DNA molecule (Figure 2-5). Today, the entire genetic structure of human DNA has been mapped and reported via the Human Genome Project (2001). Genome mapping is now central to many kinds of biological and medical research.
Fig. 2-2. Carl Linnaeus (1707-1778) is considered founder of modern taxonomy and ecology.
Evolution
Evolution means (in general usage) the gradual development of something, especially from simple to more complex forms. In biological sciences, evolution involves the processes by which different kinds of living organisms are thought to have developed and diversified from earlier forms during the history of the Earth.
Biological evolution also involves changes in heritable genetic traits within biological populations over successive generations (first described by Gregor Johann Mendel in 1865). Evolution occurs at many scales including the molecular level, cell level, organism level, species level, and ecosystem community level.
Biologists and paleontologists have been studying how different species of plants, animals, and microorganisms are related to each other for centuries. Taxonomy is the science of naming, describing, and classifying organisms and includes all plants, animals, and microorganisms. Linnaeus's system of classification is used to group organisms. Figure 2-6 illustrates how living things are classified based on shared or identifying characteristics. Figure 2-7 illustrates the hierarchical classification of living things, subdivided into increasingly smaller groups down to the individual species. From the largest group downward, the order is as follows: kingdom, phylum, class, order, family, genus, species, and subspecies. To illustrate, Figure 2-8 shows the taxonomic classification of a common house cat (Felis catus). Note that the name of an organism is always written with a genus and species name, written in italics. Most familiar animals and plants have a common name (that is not written in italics). Figure 2-9 illustrates the classification of dogs (down to the subspecies name). Note that both cats and dogs fall within the order Carnivora (meaning carnivores). This classification suggests that all organisms classified as Carnivora share common traits that link them to their ancestral heritage.
Fig. 2-6. Evolution and classification of living things (illustrated) based on shared or identifying characteristics.
Fig. 2-8. Classification (taxonomy) of a common house cat (species name: Felis catus). Different varieties of cats have common names, but are divided into subspecies.
Fig. 2-9. Classification (taxonomy) of a dog (Canis lupus familiaris). Note that all the varieties of dogs are included in this species. Wolves are Canis lupus.
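The hierarchical ranks described above (kingdom down to species) can be modeled as a simple ordered data structure. The sketch below is purely illustrative: the order, genus, and species values come from the cat and dog examples in the text, while the kingdom-through-family values (Animalia, Chordata, Mammalia, Felidae, Canidae) are standard published values added here for completeness.

```python
# Illustrative sketch: the Linnaean hierarchy as an ordered list of
# (rank, name) pairs, following the order given in the text:
# kingdom -> phylum -> class -> order -> family -> genus -> species.

HOUSE_CAT = [
    ("kingdom", "Animalia"),
    ("phylum", "Chordata"),
    ("class", "Mammalia"),
    ("order", "Carnivora"),
    ("family", "Felidae"),
    ("genus", "Felis"),
    ("species", "catus"),
]

DOG = [
    ("kingdom", "Animalia"),
    ("phylum", "Chordata"),
    ("class", "Mammalia"),
    ("order", "Carnivora"),
    ("family", "Canidae"),
    ("genus", "Canis"),
    ("species", "lupus"),
]

def binomial_name(taxonomy):
    """Return the two-part species name (genus + species epithet)."""
    ranks = dict(taxonomy)
    return f"{ranks['genus']} {ranks['species']}"

def shared_ranks(a, b):
    """Count how many leading ranks two classifications share --
    a rough proxy for how closely related two organisms are."""
    count = 0
    for (_, name_a), (_, name_b) in zip(a, b):
        if name_a != name_b:
            break
        count += 1
    return count
```

Because cats and dogs agree through the order Carnivora but diverge at the family level, `shared_ranks(HOUSE_CAT, DOG)` returns 4, mirroring the text's point that both fall within Carnivora.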
Classification Of Living Things Linked To Evolution Through Geologic Time
The evolutionary "Tree of Life" is a graphical conceptualization that hypothetically links all species back through their deep ancestral heritage, far back in geologic time, to a common ancestral origin billions of years ago. For instance, Figure 2-10 illustrates an example of the Tree of Life presented by an organization of research scientists called OneZoom.
Biologists, paleontologists, and other scientists have been classifying and reclassifying species for hundreds of years. Current thought is that there are roughly 8.7 million species of plants and animals in existence, but that number is constantly being revised.
Fig. 2-10. The evolutionary "Tree of Life" as illustrated by https://www.onezoom.org/. Click the link to explore an exhaustive classification of all known living things, with millions of organisms (species) illustrated.
2.5
Evolutionary Theory Highlights
Evolutionary theory is an essential component of the knowledge supporting the current geologic time scale.
• Evolution supports an old Earth (~4.56 billion years).
• The different time periods represented on the geologic time scale have uniquely defined populations of fossil species representative of those ages.
Divergent Evolution
• Populations that are separated environmentally can develop different features based upon adaptation to their environment.
• One group of organisms can radiate (or diversify) into many different groups and species.
• Divergence leads to different and distinct populations and communities of organisms.
Convergent Evolution
Populations can develop similar features based upon utilizing a similar environment and living habits. The term niche is used in biology to define an organism's role in an ecosystem.
Examples of Convergent Evolution:
• Both fish and marine mammals developed streamlined bodies to swim efficiently.
• Marine mammals developed fur or thick blubber to protect them from cold waters.
• Modern marine mammals share many of the same physical traits and life habits that ancient marine reptiles had before their disappearance in the mass-extinction event at the end of the Cretaceous Period (about 66 million years ago; discussed below).
• Birds, bats, flying squirrels, insects, and flying fish all independently developed the means to take flight.
• Marsupial mammals in Australia developed characteristics similar to those of placental mammals elsewhere (see table below).
Populations that evolve in separate settings may develop similar traits (convergence). Examples:

Niche            Marsupial mammals in Australia    Mammals elsewhere
Birthing manner  Marsupial                         Placental
Grazers          Kangaroo                          Deer
Carnivores       Tasmanian wolves                  Wolves/Dogs
Climbers         Koalas                            Monkeys
How Evolution Works
The life mission of individuals in any species is to eat, survive, and reproduce (Figure 2-11).
While living, individuals must deal with competition (within a population of their own species, or with other species).
Individuals must also adapt to environmental changes (changes in living space, availability of food resources, climate changes, catastrophes, etc.).
As time passes, species will either adapt to changing situations (and evolve), or they face die-offs or extinction.
Fig. 2-11. How evolution works.
All species have a role within an ecosystem.
The term niche refers to the specific area inhabited by an organism. The term niche also refers to the role or function of a species within an ecosystem, involving the interrelationships of a species with all the biotic and abiotic factors affecting it. All species fill a niche, ranging from limited, small micro-environments to a distribution on a regional or even global scale in a multitude of environmental settings.
Essential Concepts of Historical Geology & Evolution
The geologic time scale is a systematic and chronological organization of time related to the history of the Earth and Universe used by scientists (geologists, paleontologists, astronomers) to describe the timing and relationships between events that have occurred (see Figure 2-1).
Paleontology is the scientific study of life forms existing in former geologic periods, as represented by their fossils; the science involves reconstructing the physical characteristics of organisms, life habits, and the environments where they lived (paleoecology).
A fossil is a remnant or trace of an organism of some earlier geologic age, such as a skeleton or leaf imprint, embedded and preserved in sedimentary deposits. Few things living today will survive to become fossils (see the table at right on how fossils form).
The term fossil record is used by geologists and paleontologists (scientists who study paleontology) to refer to the total number of fossils that have been discovered, as well as to the information derived from them. Many species that we see today do not get a chance to be preserved as fossils, but we can still learn about them by comparing them to fossils that have been found and properly recorded.
Fossilization encompasses the processes that eventually turn plant or animal remains to stone. The table at right reviews how something survives destruction to become a fossil.
A trace fossil is a fossil impression of a footprint, trail, burrow, or other trace of an animal rather than of the animal itself. Sedimentary rocks often display an abundance of traces of how organisms interacted with their environment.
Sedimentary Sequences Preserve the Fossil Record
The history of the evolution of life is partly preserved in sedimentary rocks found around the world. The ancient history of a species is also preserved in the DNA of living organisms. Although the fossil record is extensive, there are many gaps where sediments of certain ages have not been preserved in many regions, and much has been erased as ocean crust is destroyed by the processes of plate tectonics (discussed in Chapter 4). Ancient sedimentary deposits on continents are also destroyed by erosion. Despite these issues, sedimentary deposits representing all geologic ages are preserved and exposed in different places around the world. The fossil record is best preserved and represented by sedimentary deposits associated with ancient shallow marine and coastal environmental settings now exposed in continental settings.
Transgressions and Regressions of Ancient Shallow Seas
Figure 2-12 shows how sea level rose and fell through the ages across North America. A transgression occurs when sea levels rise and shallow seas advance onto the margins of a continent. When sea level falls, the seas retreat and land is exposed, a process called a regression. For much of the last billion years, shallow seaways repeatedly transgressed onto the North American continent. Many minor transgressions and regressions also occurred, and shallow seas intermittently covered large portions of the continent. When sea level rose, sediments were deposited blanketing large regions of the continents. These deposits are preserved as sedimentary rock formations that accumulated in terrestrial, coastal, and marine depositional environments. Groups of these rock formations are parts of sedimentary sequences that preserve the fossil record.
Each of the sequences rests on the eroded surface on top of a previous sequence, represented by a major regional unconformity (also called a sequence boundary, as illustrated in Figure 2-13). Six major sequences (with their underlying unconformities) are recognized throughout North America, with equivalent sequences and sequence boundaries on other continents. Each sequence represents a major marine advance (a transgression) of shallow seas, replacing coastal plains and terrestrial environments. The major unconformities represent periods of regression (when the seas withdrew and coastal and terrestrial environmental settings replaced shallow marine environments).
Each of the sequences preserves fossils and evidence of biological activity that occurred when the sediments were deposited and preserved. Erosion through time has stripped away these deposits in many regions, but portions of each sequence are still preserved and exposed in different parts of the continents. For example, parts of four of the great sequences are exposed in the Grand Canyon (Figure 2-13). Sedimentary rocks bearing fossils from all geologic time periods have been identified in locations scattered around the world.
Fig. 2-12. Major sedimentary sequences of North America preserve some of the evidence of the fossil record (after Sloss, 1963). The Sauk Sequence is the oldest containing shell fossils of the Cambrian Period. Each sequence represents a major advance (transgression) of shallow seas and coastal environments. Major unconformities represent periods of regression (dominated by erosion when the seas withdrew).
Fig. 2-13. Paleozoic-age sedimentary sequences exposed in the Grand Canyon, Arizona include portions of the Sauk, Tippecanoe, Kaskaskia, and Absaroka sequences (shown in Figure 2-12). Each sequence is bounded (above and below) by unconformities. The oldest sequence bearing an abundance of fossils is the Sauk Sequence, which rests on top of the Great Unconformity, an erosional boundary between rocks of Precambrian and Cambrian age exposed in the deepest parts of the Grand Canyon.
Ecological Succession: How Species and Ecosystem Populations Change Over Time
Studies of the fossil record show that extinctions in Earth's history vary from the disappearance of a single species (an extinction) to the disappearance of entire lineages and populations within regional communities or globally (a mass extinction). Paleontologists have scoured outcrop areas and made extensive collections of fossils. Their investigations have revealed information about the appearance, changes, and extinction of many species. In many cases, they have made detailed analyses of fossil populations and distributions across regions where rock layers of a particular age are preserved. One example involves extensive sedimentary rock formations like the Triassic-age Chinle Formation in the Painted Desert region of Arizona that contain an abundance of well-preserved fossils (Figure 2-14).
The change in species structure of an ecological community over time is called ecological succession. Ecological succession takes place on time scales ranging from decades (such as what happens to a forest community after a massive wildfire or catastrophic superstorm) to millions of years during an ice age or a mass extinction event. Figure 2-15 shows an interpretation of the changes in species populations within an ancient ecosystem over time, as revealed by fossils preserved in successive layers of sedimentary strata. Changes in ancient species populations and ecosystems can be inferred from the abundance of fossils preserved (or missing), the character of the fossils themselves, and sometimes from the sediments surrounding fossils or trace fossils in the sedimentary layers investigated in a study area. Studies show that species appear, populations grow, and then decline and vanish, sometimes returning; often they are replaced by other species that either out-competed them or simply replaced them when climate changes or other processes altered an ecosystem community over time.
Fig. 2-14. Outcrop area of the Triassic-age Chinle Formation in the Painted Desert, Arizona is an example of an ideal study area that has an abundance of fossils preserved in many layers of strata over a large region. Layers of strata that contain fossils are commonly called fossil beds.
Fig. 2-15. Population changes in a local ecosystem over time (showing population curves for select species and the total population of all species observed preserved in fossil beds). Interpretations like this may be made from exhaustive studies of fossil collections from an area like that in Figure 2-14.
Geologic History and Biological Evolution
The following sections of this chapter review major geologic events, biological evolution, and selected important concepts related to Earth history, starting with the most ancient events and appearances of life forms in the fossil record and leading to the present.
Precambrian Eon
Precambrian is the general name for the span of geologic time between when the Earth formed within the Solar System (in Hadean time, about 4.56 billion years ago) and the beginning of the Phanerozoic Eon (about 540 million years ago). The oldest rocks on Earth are Precambrian in age. The Precambrian is subdivided into three eons:
• Hadean Eon (before about 4 billion years ago)
• Archean Eon (between about 4.0 and 2.5 billion years ago)
• Proterozoic Eon (between about 2.5 billion and 540 million years ago)
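The three-eon subdivision listed above can be expressed as a simple lookup. The sketch below (illustrative only) returns the eon name for an age given in billions of years ago, using the boundary values from the list: about 4.0 and 2.5 billion years, with the Proterozoic ending at the start of the Phanerozoic about 540 million years ago.

```python
# Illustrative sketch: classify an age (in billions of years ago, "bya")
# into its eon using the Precambrian boundaries given in the text.

def eon_for_age(age_bya):
    """Return the eon name for an age expressed in billions of years ago."""
    if age_bya >= 4.0:
        return "Hadean"          # before ~4.0 billion years ago
    elif age_bya >= 2.5:
        return "Archean"         # ~4.0 to ~2.5 billion years ago
    elif age_bya >= 0.54:
        return "Proterozoic"     # ~2.5 billion to ~540 million years ago
    else:
        return "Phanerozoic"     # younger than ~540 million years: not Precambrian
```

For example, the oldest known prokaryote fossils (~3.5 billion years old, per the text below) fall in the Archean, while the Ediacaran fauna (~0.6 billion years) falls in the very latest Proterozoic.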
The Precambrian encompassed all of early Earth history, and rocks from that time preserve evidence of the evolution of life forms on a microbial level. In biology, cell theory states that a cell is the fundamental structural and functional unit of living matter, and that a multicellular organism is a body composed of autonomous cells, with its properties being the sum of those of its cells.
Multicellular organisms (animals and plants) do not appear in the fossil record until late in Precambrian (Late Proterozoic) time.
The Phanerozoic Eon began after the end of the Proterozoic Eon about 540 million years ago, and marks the change when fossil remains of multicellular organisms began to appear in great abundance in the fossil record (discussed below).
Geologic Time: Highlights of Biological Evolution
Fig. 2-16. Current thought is that the Moon formed from the debris created by the collision of a small planet-sized object with the ancestral Earth (or Proto Earth) early in the history of the Solar System about 4.5 billion years ago.
PRECAMBRIAN
About 4.56 billion years ago
Formation of Earth and Moon within the Solar System nebula (Figure 2-16). (This is discussed in detail in Chapter 1.) This event occurred before life is known to have evolved on Earth.
About 4 billion years ago
Evidence of earliest cell-based life on Earth (prokaryotes).
About 3 billion years ago
Evidence of photosynthesis and first eukaryotic cells capable of oxygen-based respiration.
About 3.0 to 1.8 billion years ago
World-wide deposition of banded-iron formations was fundamental to the gradual conversion of Earth's atmosphere from one rich in carbon dioxide (CO2) to one containing free oxygen (O2) (discussed below). This conversion took nearly a billion years. Once there was enough free oxygen in the atmosphere, an ozone layer could develop to protect Earth from deadly solar ultraviolet (UV) radiation. UV destroys many organic compounds. Without an ozone layer, intense solar UV probably would have killed life in shallow ocean waters.
About 1.8 billion years ago
Sexual reproduction fully established in eukaryotes. Sexual reproduction increased the rate of mutation in species, leading to increased biodiversity.
About 1 billion years ago
Earliest evidence of multicellular organisms (metazoans). Early multicellular organisms were very primitive but diversified very quickly through geologic time.
PHANEROZOIC EON
Cambrian Period
Beginning about 540 million years ago
The beginning of the Cambrian Period marked a radiation of species preserved in the fossil record. This is, in part, because many organisms began to develop the first hard skeletal material as part of defensive and functional body plans. The apparent jump in the diversity of species preserved in Cambrian sediments is also partly because soft-bodied organisms were rarely preserved in Precambrian-age sediments.
Significant changes happened in the global physical environment in Cambrian time.
Formation of the ozone layer created hospitable habitats and new space for organisms to move up into and utilize the shallow, warm sea environments that followed a major transgression onto the continents. Organisms adapted to this new environment by utilizing calcite (CaCO3) for hard body parts (shells and exoskeletons). Organisms with calcareous body parts were selectively or preferentially preserved in Cambrian and younger sedimentary rocks. The selective preservation of calcareous body parts has therefore made it easier to find evidence of these life forms today preserved as fossils. Sediment composed of the skeletal remains of organisms (with shells and exoskeletons rich in CaCO3) is called lime, which lithifies into limestone, a rock that is sometimes quite fossiliferous.
Early Evidence of Life on a Global Scale
Banded-iron formations (BIFs) are sedimentary mineral deposits consisting of alternating beds of iron-rich minerals (mostly hematite) and silica-rich layers (chert or quartz) that formed about 3.0 to 1.8 billion years ago (Figure 2-17). Theory suggests BIFs record the capture of free oxygen, released by photosynthesis, by iron dissolved in ancient ocean water. The ancient oceans were enriched in CO2 (just like the atmosphere), and iron easily dissolves in CO2-rich water. This is an easy experiment to illustrate: drop an iron nail in a bottle of soda and it will dissolve completely in a few days! The early oceans must have been rich in dissolved iron (much as they are rich in salt today). Once nearly all the free iron dissolved in the ancient seawater was consumed, oxygen could gradually accumulate in the atmosphere. Once enough oxygen was free in the atmosphere, an ozone layer could form.
BIF deposits of Precambrian age are preserved in many locations around the world, occurring as massive and widespread deposits, hundreds to thousands of feet thick. The BIFs we see today are only remnants of what were probably ever greater and more extensive deposits. During Precambrian time, BIF deposits probably covered large parts of the ancient global ocean basins. Today, BIFs are the major source of the world's iron ore and are found preserved on all major continental shield regions.
Fig. 2-17. A sample of Precambrian banded-iron formation (BIF) from Fremont County, Wyoming.
Cell Theory in Evolution
Cell theory dictates that all known living things are made up of one or more cells (the fundamental structural and functional unit in all living things). All living cells arise from pre-existing cells by processes involving cell division.
Cells are divided into two main classes: prokaryotic cells and eukaryotic cells.
Prokaryotic cells include bacteria and related organisms. Prokaryotes lack a nucleus (or nuclear envelope) and are generally smaller and structurally simpler, with less complex genomes (genetic material) than eukaryotic cells (Figure 2-18).
Eukaryotic cells contain cytoplasmic organelles and a cytoskeleton, and contain a nucleus in which the genetic material is separated from the cytoplasm. Eukaryotes include fungi, plants, animals, and some unicellular organisms. Eukaryotic cells are capable of sexual reproduction (Figure 2-18).
The oldest known prokaryote fossils are about 3.5 billion years old.
The oldest known eukaryote fossils are about 1.5 billion years old.
The same basic molecular processes are involved in the lives of both prokaryotes and eukaryotes, suggesting that all present-day cells are descended from a single primordial ancestor.
Endosymbiosis is a theory suggesting that organelles in eukaryotic cells evolved when one type of cell became incorporated into another type of cell, creating a symbiotic relationship to the benefit of both (such as chloroplasts in plants, and mitochondria in animals).
Viruses are non-living organic structures capable of genetic self replication that are not classified as cells and are neither unicellular nor multicellular organisms; viruses lack a metabolic system and are dependent on the host cells that they infect to reproduce. Viruses likely have influenced evolution on a cellular level in Precambrian time, just as they impact species evolution today.
A stromatolite is a mound of calcareous sediment built up of layers of lime-secreting cyanobacteria (blue-green bacteria, algae, and other more primitive eukaryotic life forms) that trap sediment, creating layered accumulations (Figure 2-19). Stromatolites found in Precambrian rocks represent some of the earliest known fossils. Stromatolites are known from all geologic time periods and still form today, with exceptional examples resembling ancient life forms in places like Shark Bay, Australia (Figure 2-20).
Life in Late Precambrian Time (Late Proterozoic Eon)
Evidence of the first sexual reproduction appears in the fossil record about 1.2 billion years ago. Many eukaryotic organisms, including protists (both unicellular and colonial forms), fungi, and multicellular organisms (including plants and animals), reproduce sexually.
Metazoans are multicellular animals that have cells that differentiate into tissues and organs and usually have a digestive cavity and nervous system. Metazoans appeared on Earth in Late Precambrian time (Late Proterozoic Eon), consisting of cells that, with growth, would differentiate into unique tissues or organs used for special purposes, such as locomotion, feeding, reproduction, respiration, and sensing the environment.
Late Precambrian life forms have been discovered, but fossils from this period are scarce and poorly preserved because the organisms did not contain hard parts (skeletons, teeth, etc.). Impressions in sediments are dominantly trace fossils (tracks, trails, resting and feeding traces), and rare body impressions have also been found.
A group of ancient fossil organisms called the Ediacaran fauna is one of the earliest known occurrences of multicellular animals in the fossil record. They were named for the Ediacara Hills of South Australia where they were first discovered. Traces of Ediacaran fauna have been found worldwide in sedimentary rocks about 635 to 541 million years old (very late Precambrian age) and consist of frond- and tube-shaped, soft-bodied organisms, mostly sessile life forms (sessile meaning attached to the seabed). Many of the fossils from this time period share characteristics with some families or classes of organisms still found on Earth today (including segmented worms, jellyfish, chordates, and other invertebrates).
Fig. 2-18. Cell structures of Prokaryotes and Eukaryotes
Fig. 2-19. Stromatolites, fossils of cyanobacterial algal mats, occur in rocks dating back to early Precambrian time, but can still be found living in some aquatic environments today.
Fig. 2-20. Stromatolites of Shark Bay, Australia, are modern living examples that resemble fossils from Precambrian time.
The Paleozoic Era
The Paleozoic is the era of geologic time spanning about 541 to 252 million years ago. Paleozoic means "ancient life" (even though evidence of microbial life extends much further back in time, to some of the earliest sedimentary rocks preserved and discovered on Earth). The Paleozoic Era follows the Precambrian Eon and precedes the Mesozoic Era. The term Paleozoic is used to describe the age of rocks that formed and accumulated in that time period. Highlights include:
⢠Dominant large animals: Invertebrates dominate early; fish and amphibians appear in the middle Paleozoic, and reptiles appear even later.
⢠Continents were mostly clustered together throughout the Paleozoic Era.
⢠Large, warm, clear, shallow seas covered large portions of continents.
⢠Similar animal and plant species existed on each continent.
⢠Continents were mostly low with little relief. Few large mountain ranges existed on and around most continental landmasses (compared with today).
⢠The combined Appalachians and Atlas Mountains formed 350 to 400 MYA (between what was North America and Africa before the opening of the Atlantic Ocean basin).
Highlights of the Early Paleozoic Era
Evolution of early plant and animal life (dominated mostly by marine invertebrates) is revealed in the fossil record of the early part of the Paleozoic Era. Primitive land plants, insects, and the first vertebrates also appear.
Cambrian Period (540-485 million years)
The Cambrian Period is the oldest of the named geological periods of the Paleozoic Era. At the beginning of the Cambrian Period the combination of tectonic forces and erosion of the landscape allowed shallow seas to gradually cover much of North America. Shallow seas covered most of what is now the Great Basin, Rocky Mountains, and Great Plains in the west, and much of the East Coast, Appalachian region, and most of the Midwest. The shallow seas withdrew at the end of Cambrian time, but what was left behind was a blanket of Cambrian sedimentary rocks, collectively called the Sauk Sequence (see Figures 2-21 and 2-22, also see Figure 2-12). The base of the Sauk Sequence rests on an eroded surface of ancient Precambrian-age rocks (mostly metamorphic and igneous rocks of the cores of more ancient mountain systems). This sequence boundary is called the Great Unconformity. The Great Unconformity is exposed in many places throughout the western United States, and is particularly well known from exposures at the base of the Cambrian-age sedimentary rocks in the Grand Canyon's Inner Gorge (see Figure 2-13). The Great Unconformity can be traced across most of North America wherever the base of the Cambrian-age Sauk Sequence is exposed.
Calcareous skeletal shell remains first appear in the Cambrian Period.
The term Cambrian explosion refers to evidence in the fossil record showing that all major phyla were established in the transition from the latest Precambrian to the Early Cambrian Period (about 700 to 541 million years ago) (Figure 2-23). The cause of this radiation from earlier metazoan life forms is uncertain, but it may have been driven by global climate changes (hot-to-cold cycles) and the establishment of unique habitats (niches) that allowed species to evolve separately from common ancestors. In Cambrian time, escalation of predator-prey relationships and increased competition appears to have driven rapid evolution of new species (along with extinctions). Shelled organisms first appear in abundance in sedimentary deposits preserved from this time period: the fossil record shows that organisms with chitinous and calcareous shells and exoskeletons appeared and diversified. Many Cambrian-age organisms had eyes, legs (or pods), spinal cord-like features, segmented body plans, and other unique body parts and characteristics. Representatives of all phyla from the Cambrian explosion still exist in the world today (Figure 2-23). Sedimentary rocks from the Cambrian Period are typically rich in evidence of life activity. They preserve an abundance of bioturbation features (also called trace fossils) even if the life forms that created them are not preserved (an example is shown in Figure 2-24).
Invertebrates dominate the fossil record in the early Paleozoic Era. An invertebrate is an animal lacking a backbone (spinal column), such as an arthropod, mollusk, annelid worm, coelenterate, or echinoderm, among many others. Invertebrates constitute a division of the animal kingdom comprising about 95 percent of all animal species across about 30 known phyla.
By the end of the Cambrian Period several groups of invertebrates were well established in shallow marine environments, most notably trilobites, brachiopods, crinoids, bryozoans, sponges, and gastropods (snails), which are locally common fossils preserved in Cambrian sedimentary rocks (Figures 2-25 and 2-26). At the end of the Cambrian Period, sea level fell, and a long period of exposure and erosion occurred throughout North America and the other continents worldwide.
Fig. 2-21. The Great Unconformity is an erosional boundary at the base of the Sauk Sequence throughout much of North America. This view is in Wind River Canyon, Wyoming.
Fig. 2-23. The Cambrian explosion refers to the diversification of life forms that began near the end of the Precambrian Eon.
Fig. 2-25. Trilobites are common shelled fossils in sedimentary rocks from the Cambrian Period.
Fig. 2-22. The fossiliferous Bright Angel Shale of Cambrian age is one of the rock formations of the Sauk Sequence exposed throughout the Grand Canyon region.
Fig. 2-24. Invertebrate tracks and trails appear in abundance in Cambrian-age sediments (Tapeats Sandstone) in the lower Grand Canyon in Arizona.
Ordovician Period (485-444 million years)
Shallow seas once again flooded across much of North America through much of the Ordovician Period. Deposition of sediments during this marine transgression resulted in the Tippecanoe Sequence, which rests unconformably on top of the Sauk Sequence (see Figure 2-13). When sea level rose again (millions of years later) and shallow seas returned to cover large portions of the continents, communities of life forms in the oceans had significantly changed.
Trilobites no longer dominated the fossil record; other life forms began to proliferate in warm, shallow marine environments. Communities similar to some modern reef-like settings appear in the fossil record. Corals (unrelated to modern varieties), crinoids, cephalopods, brachiopods, bryozoans, and other life forms with calcareous skeletons dominate the fossil record. Their abundance reflects their ability to live, proliferate, and, upon death, survive burial and fossilization processes. Rare early examples of jawless, armored fish and land plants have been discovered in sediment deposits of Ordovician age. Sedimentary rocks of Ordovician age crop out in many locations around the country, but they are perhaps best known from the Cincinnati Arch region (of Ohio, Kentucky, and Indiana), where a great abundance of well-preserved fossils occurs in strata from that time period (Figures 2-27 to 2-29).
Fig. 2-27. Fossil-rich sedimentary rocks of the Tippecanoe Sequence are perhaps most famous from the Cincinnati Arch region.
Fig. 2-29. Common fossils of the Ordovician Period include brachiopods (a,b), cephalopods (c,d), trilobites (e), and crinoids (f).
Silurian Period (444-419 million years)
Few rocks of Silurian age are preserved in North America (they either were not preserved or are not exposed at the surface). Sedimentary rocks of Silurian age preserved in upstate New York, around the Cincinnati Arch, and around the margins of the Michigan Basin are notable exceptions. Large fossil pinnacle reefs occur around the margins of an ancient sea basin that covered what is now the state of Michigan. The fossil record shows that the Silurian world was dominated by marine invertebrates, but the first fish-like chordates appear. Simple and primitive forms of land plants began to flourish and diversify during Silurian time. Plants on land became a food source, allowing the first animals to emerge onto dry land (including early insects, arachnids, centipedes, and scorpions) (Figure 2-30). The first jawed fishes and freshwater fishes appear in the Silurian. Large marine, scorpion-like creatures called eurypterids grew to nearly 7 feet long (much larger than anything like them that exists today). Early vascular plants evolved in the Silurian Period, setting the evolutionary stage for the terrestrial swamp and forest ecosystems that followed in geologic time.
Fig. 2-30. Common and unusual fossils of the Silurian Period.
Highlights of the Middle and Late Paleozoic Era
The Middle to Late Paleozoic Era is highlighted by the development of forest ecosystems, the spread of vertebrate species onto land, and the rise of large fish in the oceans.
Devonian Period (419 to 359 million years)
On land, free-sporing vascular plants adapted and spread across the landscape, allowing the first forests to cover the continents. By the middle of the Devonian several groups of plants had evolved leaves and true roots, and by the end of the period the first seed-bearing plants appeared. Terrestrial arthropods began to flourish. In the marine world, early ray-finned and lobe-finned bony fish and sharks appear and flourish, as revealed in the fossil record. The first coiled-shelled ammonoid mollusks appeared. Holdover groups of marine invertebrates from earlier times persisted: trilobites, brachiopods, cephalopods, and reef-forming tabulate and rugose corals flourished in shallow seas (Figure 2-31).
The current oil and gas boom in the United States is largely the result of fracking technologies used to extract petroleum from the tight (meaning low permeability), black shales associated with organic-rich muddy sediments deposited in inland seas of Devonian and Mississippian age. These black shales underlie large regions of the Appalachians, the Midcontinent, and the northern Great Plains in the United States. These deposits are part of the Kaskaskia Sequence (see Figure 2-13).
Fig. 2-31. Devonian Period brachiopods and common fossils from Kentucky.
Carboniferous Period (359 to 299 million years ago)
The Carboniferous Period got its name from the abundance of coal deposits in rocks of Late Paleozoic age in Europe. In the United States, the Carboniferous Period is subdivided into the Mississippian Period and the Pennsylvanian Period. Abundant coal deposits of these ages also exist in the eastern and central United States. During the Carboniferous the world was very different than today: by some estimates, the Earth's atmosphere was much thicker, held as much as 40% more oxygen, and supported a more uniform global environment than exists today.
Fig. 2-32. Mississippian Period marine invertebrate fossils from Pennsylvania.
Fig. 2-33. The massive Redwall Limestone of Marble Canyon and the Grand Canyon formed in the Mississippian Period.
Mississippian Period (359 to 323 million years ago)
Sedimentary rocks of Mississippian age in North America are dominated by marine sediments preserved as limestone rock formations, deposited when shallow, warm seas covered much of the continent. Massive fossiliferous limestone rock formations of Mississippian age are exposed throughout the Midcontinent (Mississippi Valley) and throughout the Appalachian and Rocky Mountain regions (Figure 2-32). For example, the Redwall Limestone in the Grand Canyon region is about 800 feet thick (Figure 2-33). Mississippian rocks throughout these regions host many cavern systems (such as Mammoth Cave in Kentucky). Mississippian rocks are part of the Kaskaskia Sequence (see Figure 2-13).
The southern Appalachian Mountains began to rise in Mississippian time, and terrestrial lowlands and coastal swamps began to replace the shallow seas that covered much of the North American continent at that time. Coastal swamps along the margins of mountain ranges rising above the shallow seas began to support forests.
Amphibians became the dominant marginal-land vertebrates in Mississippian time. Amphibians worldwide require water to lay their eggs; today there are only freshwater and terrestrial species (there are no marine amphibian species).
Pennsylvanian Period (323 to 299 million years ago)
The Pennsylvanian Period is named for the coal-bearing region in the Appalachian Plateau and Mountains. Great coastal forests and swamplands covered large regions of North America and parts of Europe. Great coal deposits formed from extensive swamps that trapped organic sediments in locations around the world. Pennsylvanian rocks are perhaps best known for their coal-bearing basins in the Appalachian and Midwest regions (Figures 2-34 and 2-35). These ancient swamp-forest deposits are the source of coal mined from rock formations in West Virginia, Kentucky, Pennsylvania, Ohio, and other states.
Perhaps the greatest evolutionary innovation of the Carboniferous Period was the development of the amniote egg, which allowed lizard-like tetrapods to advance. Reptiles evolved and became the first fully terrestrial vertebrates, descended from amphibian ancestors. With the abundance of vegetation on land, arthropods flourished, including species of insects much larger than any found on Earth today (Figure 2-31). In Pennsylvanian time, glaciation cycles in the Southern Hemisphere caused repeated rises and falls in sea level. The Appalachian and Ouachita Mountain systems also began to develop as ancient forms of the continents of Africa, South America, and North America began to collide with one another.
It was during the Pennsylvanian Period that the world's continents assembled to form the supercontinent of Pangaea (discussed in Chapter 4). The unconformable boundary between the Kaskaskia Sequence and the overlying Absaroka Sequence is the boundary between sedimentary rocks of Mississippian and Pennsylvanian age (see Figure 2-13). The Absaroka Sequence includes sediments deposited during the Pennsylvanian, Permian, and Triassic Periods (see below).
Fig. 2-34. Pennsylvanian age coal-bearing basins in the eastern United States are part of the Absaroka Sequence.
Fig. 2-35. Reconstruction of a swamp forest of the Pennsylvanian Period.
Permian Period (299 to 252 million years)
The last period of the Paleozoic Era was a time of colossal changes. All the continents of the world had combined to form the supercontinent of Pangaea. In the fossil record, a group of tetrapods (lizard-like, four-legged animals with backbones) called amniotes appeared, capable of living on dry land and producing terrestrially adapted eggs. All modern reptiles, birds, and mammals are descended from a common ancestral group of amniotes. Reptiles adapted and flourished in the more arid conditions. Modern reptiles are descended from Paleozoic-age tetrapods (Figures 2-37 and 2-38).
During the Permian, the expansive fern forests that existed during the Carboniferous disappeared, and vast desert regions spread over the North American continental interior. Seed-bearing conifers (gymnosperms) first appear in the Permian fossil record.
In Permian time, seawater began to flood the great rift valleys associated with the opening of the Atlantic Ocean basin and the separation of North America and South America. One arm of the sea flooded westward into an inland sea basin located in the West Texas and New Mexico region (Figure 2-39). Great reef tracts developed in and around this inland sea basin (called the Permian Basin). Eventually the Permian Basin completely filled in with massive accumulations of salts (gypsum and other evaporites). Today some of the ancient limestone reefs are exposed in the mountain ranges around parts of this oil-producing sedimentary basin (Figure 2-40).
The end of the Permian Period (and Paleozoic Era) is marked by the greatest mass extinction in Earth history.
Fig. 2-37. Modern reptiles (like this western fence lizard) are descended from Permian tetrapods.
Fig. 2-39. Map of the Permian Reef complex in the Permian Basin of West Texas and southern New Mexico.
Fig. 2-38. Dimetrodon, a mammal-like reptile from the Permian Period, on display at the Chicago Field Museum.
Fig. 2-40. Permian limestone reef tract exposed in Texas and New Mexico, such as in Guadalupe Mountains and Carlsbad Caverns National Parks.
Evidence of Large Mass Extinctions Preserved In the Fossil Record
Extinction is the state or process of a species, family, or larger group being or becoming extinct (ceasing to exist).
Extensive studies of microfossils in deep well cores extracted from around the world show that the appearance and disappearance (extinction) of species has happened continuously through geologic time, but the rate was not constant.
As climates and landscapes changed, new species evolved to fit ever-changing ecological niches; older species faded away.
A mass extinction is an episode or event in Earth history in which large numbers of species vanish from the fossil record nearly simultaneously. The causes of mass extinctions are debated, but some are linked to possible global climate changes associated with asteroid impacts, massive volcanism episodes, the onset of ice ages, or a combination of effects that altered environments globally. Many questions remain about the causes of the great mass extinctions (answers may shed light on what is happening, or may happen, to the world as human activities impact the modern environment).
Current estimates are that 90 percent of all species that have ever lived on Earth are now extinct. However, the rate of extinction has not been constant. Mass extinctions have occurred at least five times in the last 500 million years. In each mass extinction, as much as 50 to 90 percent of previously existing species on Earth disappeared in a very short period of geologic time (Figure 2-41). Some mass extinctions are associated with great catastrophes, such as massive asteroid impacts that disrupted or destroyed ecosystems around the world (Figure 2-42).
Fig. 2-41. Great mass extinction events in the fossil record (species diversity compared with the geologic time scale).
Fig. 2-42. A massive asteroid impact can ruin your day (and your species, and many others).
Fig. 2-43. A classic Far Side cartoon by Gary Larson about "the real reason dinosaurs became extinct."
The Permian/Triassic (P/T) Boundary Extinction: The Greatest of All Mass Extinctions
The greatest mass extinction event occurred at the end of the Permian Period (about 252 million years ago). Most families of organisms that existed in the Paleozoic Era vanished at the end of the Permian Period. A 2008 report published by the Royal Society of London estimated that as much as 96 percent of marine species and about 70 percent of terrestrial vertebrates that existed in Late Permian time vanished during the end-Permian extinction event. This occurred while the supercontinent Pangaea was assembled, and great amounts of volcanism are known from that period. However, other causes, such as glaciation, ocean circulation collapse, possible asteroid and comet impacts, extraterrestrial radiation events, and others have been pondered.
The problem with studying mass extinctions like the one at the Permian-Triassic boundary is that the world has changed significantly since that time. Bedrock of Permian and older age under all the world's ocean basins has been subducted back into the Earth's mantle or heavily altered by mountain-building processes. In addition, much of the sedimentary record associated with land exposed at that time was stripped away by erosion before sediments began to be deposited and preserved in the Triassic Period. Whatever the cause, it took many millions of years after the P/T extinction event (or events) for the biodiversity of the planet to return to levels that existed in the Late Paleozoic Era. When biodiversity returned, the world hosted completely different varieties of species and ecological communities, many replacing or occupying the same life habits (niches) and environments occupied by organisms that disappeared in the P/T extinction.
Great extinction events created opportunities for new life forms to emerge. For instance, dinosaurs and many other life forms appeared only after the mass extinction at the end of the Permian Period (about 252 million years ago). Similarly, mammals diversified and replaced the dinosaurs after the latter went extinct at the end of the Cretaceous Period.
Perhaps the most studied extinction event is the Cretaceous-Tertiary boundary, where strong evidence suggests at least one asteroid collided with Earth in the vicinity of the Yucatan Peninsula in Mexico (about 66 million years ago) (Figure 2-41, also see Figure 2-58 below). This extinction killed off the dinosaurs and many other families of organisms that lived in the oceans and on land. However, the catastrophe made room for mammals and other groups of organisms to rapidly diversify and evolve. Unlike the P/T extinction, which has limited exposure around the world, there are many locations worldwide, on all continents and within sediments extracted from the seafloor, that reveal information about what happened at the end of the Cretaceous Period about 66 million years ago (discussed below).
Are humans causing a sixth great mass extinction?
Many scientists believe evidence suggests that another mass extinction is currently under way. Global climate change, the growth of the human population, and the expansion of human activity into previously wild habitats are largely to blame. Some estimates suggest that human activities such as land clearing (for agriculture), pollution, mining, urban development, and overfishing may drive more than half of the world's marine and land species to extinction within the next century. This extinction event perhaps began at the end of the last ice age, when humans spread around the globe and their populations expanded while the global climate was drastically changing. Many species of large land animals and birds have vanished in the past 10,000 years, but the rate of change has drastically increased in the past 100 years with the tremendous expansion of the global human population.
Mesozoic Era
The Mesozoic Era is the era between the Paleozoic and Cenozoic Eras, comprising the Triassic, Jurassic, and Cretaceous Periods. The Mesozoic Era is commonly referred to as the Age of Reptiles. Highlights of the Mesozoic Era include:
⢠Dominant large animals: Reptiles and dinosaurs; birds and mammals appear.
⢠Increased mountain building occurred in many regions around the globe, and with that, lots of sediments were generated from erosion.
⢠The ancient supercontinent, Pangaea, begins to breakup at about 200 million years ago (Pangaea is discussed in Chapter 4).
⢠With the breakup of Pangaea, continents began moving apart. This caused isolation of species and communities, and as a result, created more diversity in plant and animal species through divergent evolution.
⢠The ancestral Rocky Mountains and Cordilleran Ranges formed in western North America between about 120 to 66 million years ago.
Triassic Period (252 to 201 million years)
Following the great extinction event at the end of the Permian Period, life on Earth gradually reestablished itself both on land and in the oceans through succession. Scleractinians (modern corals) replaced earlier forms as the dominant reef-forming organisms. On land, reptilian therapsids (an order related to the distant ancestors of mammals) and archosaurs (ancestors of dinosaurs and modern crocodilians) became the dominant vertebrates. New groups evolved in the middle to late Triassic Period, including the first dinosaurs, primitive mammals, and flying vertebrates (pterosaurs), but these families did not flourish until after another global extinction event at the close of Triassic time. Current thought is that ancestral forms of both mammals and dinosaurs first appear in the fossil record in Late Triassic time, about 200 million years ago (Figures 2-44 to 2-46). Petrified Forest National Park displays an abundance of plant and animal fossils associated with coastal swamp environments that were preserved in ancient volcanic ash beds, now exposed in the Painted Desert region of eastern Arizona and New Mexico (Figure 2-47).
During the middle Triassic, the supercontinent of Pangaea began to rift apart into separate landmasses, Laurasia to the north and Gondwanaland to the south. With the breakup of Pangaea, terrestrial climates gradually changed from mostly hot and dry to more humid conditions. Another mass extinction in the fossil record marks the end of the Triassic Period.
Red beds are oxidized, iron-rich sedimentary deposits that occur extensively throughout western North America, deposited in coastal terrestrial and nearshore environments during the Triassic Period (example in Figure 2-48). Red beds of Triassic age are well exposed in west Texas, throughout the Colorado Plateau and Rocky Mountain region, and in the Newark and Connecticut Basins on the East Coast. These are associated with the Absaroka Sequence, which accumulated while Pangaea was still assembled and hot, dry climate conditions prevailed across most of North America.
Fig. 2-48. Red beds of the Chugwater Group of formations of Triassic age exposed near Lander, Wyoming.
Fig. 2-46. Placerias, a large mammal-like reptile from the Triassic Period, from Petrified Forest National Park, Arizona.
Fig. 2-45. Desmatosuchus, an archosaur from the Triassic Period found in West Texas.
Fig. 2-47. Extensive coniferous forests covered coastal regions, as illustrated by the massive deposits of fossil wood preserved in Triassic-age sedimentary rocks in and around Petrified Forest National Park, Arizona.
Jurassic Period (201 to 145 million years)
The cause of the mass extinction at the end of the Triassic is still unclear, but evidence shows that it was associated with rapid and massive volcanism taking place during the breakup of Pangaea (the opening of the Atlantic Ocean basin as North and South America gradually split away from the African and European continents).
With other life forms out of the way, dinosaurs adapted and diversified into a wide variety of groups. Although pterosaurs were the dominant flying vertebrates during the Jurassic Period, the first birds appeared, having evolved from a branch of theropod dinosaurs (Figures 2-49 to 2-51). Rare small mammals occur in the fossil record of the Jurassic Period but remained insignificant compared to the dinosaurs that dominated the landscape. Large marine reptiles, including ichthyosaurs and plesiosaurs, dominated the oceans.
Sedimentary rocks of the Zuni Sequence are well preserved and exposed throughout the Colorado Plateau region. During the late Jurassic Period a great coastal sand desert covered much of the western part of the continental United States. This ancient sand desert would rival the large deserts of the Sahara or Arabian Peninsula that exist today. Through time, the desert conditions gave way to more humid coastal conditions with river systems and coastal swamplands (home to a variety of dinosaurs of the Jurassic and following Cretaceous Periods). The massive white cliffs of Zion National Park preserve evidence of this great sand desert in the western United States (Figure 2-53).
Fig. 2-53. Massive sandstone cliffs of the Navajo Sandstone of Jurassic age are well exposed in Zion National Park, Utah.
Cretaceous Period (145 to 66 million years)
During the Cretaceous Period the Earth was relatively warm compared to the world today. There were no glaciers on the planet, and sea level was as much as 200 feet higher than today. Fossils of warm-water organisms are found in rocks located in Arctic regions today. The dinosaurs that survived into the Cretaceous Period diversified and evolved into many unusual forms. Large marine reptiles called mosasaurs were the dominant organisms in the ocean. Sediments deposited in shallow seas flooding onto the continents contain an abundance of ammonites, squid-like organisms that had calcareous shells similar to modern nautilus species (Figure 2-54). Cretaceous gets its name from creta, Latin for chalk. The shallow warm seas of the Cretaceous Period were locations where the calcareous skeletal remains of planktonic organisms called coccoliths accumulated, forming great deposits of chalk, such as those exposed in the White Cliffs of Dover, England (Figure 2-55). In many places in the equatorial realm, oyster-like organisms called rudists formed great reefs. Flowering plants also first appear in the fossil record. Birds existed in Cretaceous time but were insignificant compared to the flying pterosaurs. Some of the largest (and perhaps most familiar) dinosaurs appear in Late Cretaceous time (Figures 2-56 to 2-58). In contrast, small mammals first appear in abundance in the Cretaceous Period, but they were still generally insignificant compared with the more dominant reptile and dinosaur species around them.
During Late Cretaceous time, a large mountain range and volcanic arc developed along the western margin of North America as the Atlantic Ocean basin began to rapidly expand. The rising mountains in the west forced an isostatic downwarping of the central part of the North American continent, resulting in the accumulation of massive sedimentary rock formations of the Zuni Sequence, as illustrated in Figure 2-59. This downwarping eventually allowed the shallow Western Interior Seaway to expand and flood across much of the region extending from the Arctic Ocean in Alaska and Canada to the Texas Gulf Coast, including the Great Plains and Colorado Plateau regions (Figure 2-60).
As Pangaea broke apart, a great volcanic arc system (called the Cordilleran Range) began to form along the western margin of North America. At the same time, shallow seaways began to expand across central North America, eventually merging to form the ancient Western Interior Seaway. This ancient seaway extended from Texas to Alaska by Cretaceous time and covered what are now the Great Plains and Rocky Mountain regions of the United States and Canada (Figure 2-60).
Fig. 2-54. Late Cretaceous ammonites of the Western Interior Seaway - an ancient seaway that existed in the Great Plains and Rocky Mountain region during Cretaceous Time.
Fig. 2-57. Parasaurolophus, a Late Cretaceous dinosaur with a crested skull.
Fig. 2-55. Cretaceous-age chalk exposed in the White Cliffs of Dover, England.
Fig. 2-56. Triceratops, a Late Cretaceous herbivore dinosaur. Chicago Field Museum.
Fig. 2-58. Dinosaur Sue, a famous Tyrannosaurus rex fossil on display at the Chicago Field Museum. T. rex was a large carnivorous dinosaur of the Late Cretaceous Period.
The Cretaceous-Tertiary Boundary (or K/T Boundary) Extinction
The Cretaceous-Tertiary (K/T) boundary [or Cretaceous/Paleogene (K/P) boundary] is associated with one of the most investigated mass extinction events. The age of the K/T boundary is currently estimated to be about 66 million years based on absolute dating methods. It has been well investigated partly because it is the youngest of the large extinctions that totally changed the nature of life on Earth. It is also well exposed in many locations on land around the world and has been studied extensively in core samples from deep-sea drilling projects.
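For context, the "absolute dating methods" behind boundary ages like this are radiometric: an age follows from the measured daughter-to-parent isotope ratio and the parent's decay constant. The sketch below shows only the standard age equation; the 1.25-billion-year half-life is a hypothetical stand-in for illustration, not the isotope system actually used to date the boundary.

```python
import math

def radiometric_age(daughter_parent_ratio: float, half_life_years: float) -> float:
    """Standard radiometric age equation: t = ln(1 + D/P) / lambda."""
    decay_constant = math.log(2) / half_life_years  # lambda = ln(2) / half-life
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# Round trip: the D/P ratio a 66-million-year-old sample would accumulate
# for a hypothetical parent isotope with a 1.25-billion-year half-life...
half_life = 1.25e9
ratio = math.exp(math.log(2) / half_life * 66e6) - 1.0
# ...recovers the 66 Ma age.
print(f"D/P = {ratio:.5f}")
print(f"age = {radiometric_age(ratio, half_life) / 1e6:.1f} Ma")  # 66.0 Ma
```

Real boundary dating combines several isotope systems and many samples; the equation itself is the common core.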
The K/T extinction event is believed to have been caused by a massive asteroid impact in the Yucatan region of Mexico, although other possible sites of large impacts are being considered. What is known is that all species of dinosaurs on land, and the marine reptiles and ammonites in the marine realm, vanished (Figures 2-61 and 2-62).
The massive asteroid impact and the shock waves, monstrous tsunamis, firestorms, ash clouds, toxic gas clouds, and global winter-like conditions that followed caused ecosystem collapse and the failure of food chains and webs in both the oceans and on land.
It is important to note that all species that exist today are descendants of the limited number of species that survived the global catastrophe: small mammals, birds, invertebrates, reptiles, amphibians, fish, and other surviving groups had evolutionary advantages that allowed them to survive. With the dinosaurs, pterosaurs, large swimming reptiles, and other large animals of the Cretaceous Period out of the way, the surviving species proliferated and moved into empty and new niches that allowed them to prosper and diversify.
The K/T boundary occurred near the end of the Zuni Sequence cycle, when sea level also fell around the globe (see Figure 2-13). In the following Cenozoic Era, many changes continued to occur, including the uplift of the Rocky Mountain region and the withdrawal and disappearance of the shallow inland seas and great lakes that previously flooded the Western Interior region.
Fig. 2-60. Western Interior Seaway and locations of plausible asteroid impact sites around North America.
Fig. 2-61. The person is pointing toward a zone of disrupted bedding that corresponds to the zone where many terrestrial and marine species vanished from the fossil record at the end of the Cretaceous Period.
Fig. 2-62. A layer of highly disrupted sediments corresponds with the mass extinction horizon in marine sediments at the Cretaceous-Tertiary boundary, exposed in and around Badlands National Park, South Dakota.
Cenozoic Era
The Cenozoic is commonly referred to as the Age of Mammals. The Cenozoic Era began with the mass extinction event associated with the K/T Boundary (discussed above). Highlights of the Cenozoic Era include:
⢠Dominant large animals: Mammals. Mammals diversified, gradually replacing the niches held by dinosaurs wiped out by the K/T extinction.
⢠Mountain building continued, especially around the Pacific Ocean; the Himalayan Mountains, the Alps, and mountain ranges throughout southern Eurasia begin to form. The Rocky Mountains and Cordilleran Ranges in western North America continued to form.
⢠Lots of erosion of existing mountains fed sediments to coastal plains and ocean margin basins.
⢠The youngest Tejas Sequence began to accumulated in the early Cenozoic Era and continues to the present day, forming the Atlantic and Gulf Coast regions.
The Cenozoic Era is generally divided into two (or three) periods:
Era        Period                   Time Range
Cenozoic   Tertiary: Paleogene      66 million to 23 million years ago
Cenozoic   Tertiary: Neogene        23 million to 2.6 million years ago
Cenozoic   Quaternary               2.6 million years ago to the present
The older name, Tertiary Period, is now subdivided into two periods: the Paleogene Period and the Neogene Period.
The periods of the Cenozoic Era are also subdivided into time periods called epochs.
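The Cenozoic subdivisions above can be treated as a simple lookup table. The sketch below is illustrative only; the boundary ages (in millions of years, Ma) are taken from the table above, and the function name is an invention for this example.

```python
# Cenozoic periods with their age ranges in millions of years ago (Ma),
# taken from the table above: Paleogene 66-23, Neogene 23-2.6, Quaternary 2.6-0.
CENOZOIC_PERIODS = [
    ("Paleogene", 66.0, 23.0),
    ("Neogene", 23.0, 2.6),
    ("Quaternary", 2.6, 0.0),
]

def cenozoic_period(age_ma: float) -> str:
    """Return the Cenozoic period containing an age given in Ma."""
    for name, start, end in CENOZOIC_PERIODS:
        if end <= age_ma <= start:
            return name
    raise ValueError(f"{age_ma} Ma is not within the Cenozoic Era (0-66 Ma)")

print(cenozoic_period(50))   # Paleogene
print(cenozoic_period(10))   # Neogene
print(cenozoic_period(0.5))  # Quaternary
```

The same pattern extends naturally to the epoch boundaries listed in the sections that follow.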
Paleogene Period (66 to 23 million years ago)
Paleocene Epoch (66 to 56 million years ago)
The mass extinction at the end of the Cretaceous Period left many of the niches filled by dinosaurs and large swimming reptiles empty. Mammals with placental-type live birth appear. The shallow seas of the Cretaceous Period withdrew or were gradually replaced by lakes. In North America, the Rocky Mountains began to rise. See more about the Paleocene: American Museum of Natural History

Eocene Epoch (56 to 33.9 million years ago)
Modern-like forms of mammals appear and diversify in the fossil record during the Eocene Epoch. The Eocene was a warm period with an expanded tropical realm. The end of the Eocene Epoch is marked by a mass extinction that may have involved asteroid collisions in Siberia and in the vicinity of Chesapeake Bay. See more about the Eocene: American Museum of Natural History

Oligocene Epoch (33.9 to 23.0 million years ago)
The Oligocene was a time of transition when older life forms were replaced by the life forms that dominate the world today. The warmer, more tropical environments of the Eocene Epoch gave way to drier landscapes dominated by grasslands, while broad-leaf forests became more restricted to the equatorial realm. See more about the Oligocene: American Museum of Natural History
Figures 2-63 to 2-66 are selected examples of locations where Paleogene-age sedimentary rock formations are exposed and have been investigated in the United States. There are many other famous locations throughout the Atlantic and Gulf coastal plains and throughout the western United States.
Neogene Period (23 to 2.6 million years ago)
Miocene Epoch (23 to 5.3 million years ago)
Animals and plants of the Miocene Epoch approach modern life forms in diversity and appearance. Earth was warmer, with expanded tropical realms compared to the modern world. The Himalayan Mountains began to rise as the Indian continental landmass began to collide with Asia. See more about the Miocene: American Museum of Natural History

Pliocene Epoch (5.3 to 2.6 million years ago)
Global climates cooled and became drier with the onset of glaciation cycles. Most families of animals and plants found in the world had ancestral forms during the Pliocene, including humans. Greenland's ice sheet started to form. South America and North America became linked at the Isthmus of Panama, allowing the cross migration of many species between continents, but also shutting off the migration of marine species between the Atlantic and Pacific oceans. The same kinds of interactions took place when Africa collided with Europe. See more about the Pliocene: American Museum of Natural History
Figures 2-67 to 2-70 are selected examples of locations where Neogene-age sedimentary rock formations are exposed and have been investigated in the United States. There are many other famous locations throughout the Atlantic and Gulf coastal plains and throughout the western United States.
Quaternary Period (2.6 million years ago to Present)
Pleistocene Epoch (2.6 million to 11,000 years ago)
A time of major ice ages, when continental glaciers advanced and retreated, covering much of northern North America and Europe during cold periods. The modern human species appears in the fossil record. Many species of large land mammals went extinct at the end of the Pleistocene Epoch. Learn more about the Pleistocene of California preserved in the La Brea Tar Pits, Los Angeles (UC Berkeley Museum of Paleontology website).

Holocene Epoch (11,500 years ago to the present)
From the end of the Wisconsin ice age to the present. Includes a 400-foot rise in sea level and the rise of human civilizations. Humans rose to become the dominant species on Earth. Learn more about the Holocene: American Museum of Natural History
Figures 2-71 to 2-74 are selected examples of locations where Quaternary-age sedimentary rock formations are exposed and have been investigated in the United States. There are many other famous locations throughout the Atlantic and Gulf coastal plains, in and around ancient lake basins throughout the western states, and throughout the glaciated regions of the Midwest, Great Lake and New England regions where continental glaciers once covered the landscapes.
Fig. 2-71. A thick sequence of coastal and nearshore deposits of Pleistocene age is exposed in the sea cliffs of Thornton State Beach south of San Francisco, California.
Fig. 2-73. Glacial till and outwash exposed at Caumsett State Park, Long Island, New York. Long Island is underlain by unconsolidated Pleistocene-age glacial deposits.
Fig. 2-74. Glacial moraine at Montauk Point on Long Island, New York is part of the southern terminal moraine of the Wisconsin glaciation at the end of the Pleistocene Epoch.
Evolution of Humans and the Rise of Modern Civilization
Some 15 to 20 different species of early human-like ancestors (hominins) are currently recognized. However, not all scientists studying human evolution agree on how these species are related or on how or why they died out. The majority of early human species left no living descendants. Scientists also debate how to identify and classify particular species of early humans, and what factors influenced the evolution and extinction of each species or sub-species.
Humans are included in the family of primates (which includes modern monkeys, apes, and humans). Primates descended from an earlier monkey-like group called prosimians that appears in the fossil record in Eocene to Oligocene time. Primate species appear in abundance in many locations around the world during the Miocene Epoch (between 23 and 5.3 million years ago).
Fossils of the earliest recorded human-like ancestors come from sediments deposited 6-7 million years ago in western Africa; these species had chimpanzee-sized brains and were able to walk upright on two legs.
Fossils 6 to 3 million years old recovered in eastern Africa (Ethiopia) show species with ape-like features that walked upright and lived in forested environments.
By 4 million years ago, early human species lived near open areas in forested environments; bone structures show they were able to walk upright (bipedal) and still climb trees.
The famous Lucy skeleton (about 3 million years old) shows a species with ape-like proportions of the face and braincase and strong arms (for climbing), but one that walked upright on arched feet.
The oldest stone tools have been found in sediments deposited 2.6 million years ago. Homo habilis (2.4 to 1.4 million years ago) is thought to represent the first stone toolmaker.
Multiple species of the genus Homo have been discovered from the time period of about 2 to 1 million years ago; some sharing the same environments.
Human use of fire began about 800,000 years ago. Evidence suggests fire was used for warmth, cooking, socializing, and safety from predators.
Homo erectus is known from about 1.89 million to 143,000 years ago, and fossils have been recovered from places as distant as eastern and southern Africa, western Asia (Republic of Georgia), China, and Indonesia. The species used fire and ate meat, and evidence suggests that they took care of old and weak members of their clans.
A rapid increase in human brain size took place from 800,000 to 200,000 years ago, giving humans better survival skills and the ability to adapt to changing environmental conditions (such as the onset of ice ages and interglacial warm and dry periods).
Our species, Homo sapiens, first appears in the fossil record about 200,000 years ago in Africa, but spread out into Europe and Asia by about 100,000 years ago (Figure 2-75). We now inhabit land everywhere on the planet, and we are the sole surviving species of a once diverse family of human-like species. As human populations spread around the world, populations became isolated and developed the characteristics associated with the major races of humans that exist throughout the world today.
Climate change associated with the ice ages must have had significant impacts on the survival and extinction of human and human-like species. In addition, populations were impacted by massive volcanic episodes, such as the Toba supereruption in Sumatra about 75,000 years ago.
Although new discoveries are constantly being made, current thought is that humans first came to Australia within the past 60,000 years and to the Americas within the past 30,000 years. The use of agricultural methods and the rise of the first civilizations developed within the past 12,000 years. Over that time the human species has expanded, diversified, and adapted. In contrast, many other species have already gone extinct due to human predation, isolation, and habitat destruction. The modern human population has benefited from advances in medicine, agriculture, and transportation. The world's population has doubled in the last 40 years, but the rate of population growth has declined by almost half in that time (though not enough to stop population growth) (Figures 2-76 and 2-77). However, this success is countered by the demands for land and resources that lead to war and conflicts between populations. Population growth is not evenly distributed around the world (Figure 2-78).
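The doubling figure in this paragraph implies an average growth rate through the exponential-growth relation t_double = ln(2) / r. Below is a quick arithmetic check, assuming smooth exponential growth over the stated 40 years, which is a simplification of the real, uneven record:

```python
import math

doubling_time_years = 40.0  # "doubled in the last 40 years" (stated above)
avg_growth_rate = math.log(2) / doubling_time_years  # r = ln(2) / t_double
print(f"implied average growth rate: {avg_growth_rate * 100:.2f}% per year")  # 1.73% per year
```

A declining growth rate, as the paragraph notes, lengthens the next doubling time rather than stopping growth outright.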
Fig. 2-75. Routes of human evolution and migration around the world beginning in late Pleistocene time.
Fig. 2-76. Within the past century, human activity has completely changed large regions of the planet's physical environment.
Fig. 2-77. World population growth 1600 to 2017 and rate of population growth 1950 to 2017 from United Nations data.
Fig. 2-78. World population density map of the world for 2015. Note that large populations have developed in regions of high agricultural productivity where water is abundant (and perhaps the most valuable resource to a region).
What Does It Mean To Be Human? (Ancestral Human Evolution, Adaptations, and Behavior)
Check out the Smithsonian Institution's National Museum of Natural History website on human evolution (https://humanorigins.si.edu/). This is a comprehensive website that reviews human evolution research, evidence (including human fossils, tools, genetics, geochronology dating, and fact-based interpretations).
Refugia: How Life Goes On After Environmental Calamities
Even after the great mass extinctions, life returned and flourished in abundance. Once the environmental calamity that caused the great mass extinction at the end of the Cretaceous Period ended, succession proceeded from life forms that survived in place or survived in refugia. A refugium is an area in which a population of organisms can survive during an extended period of unfavorable conditions. Refugia are isolated or protected environmental settings that survive major climate changes. Examples include:
⢠an unglaciated area on a south-facing mountain slope where plants and animals survive in isolation, surrounded by advancing continental glaciers.
⢠species surviving an isolated mountain peak cool and wet enough to allow some species to survive when surrounding lowlands change from forests to desert conditions.
⢠plants and animals that become isolated on islands when sea level rises, and relative species elsewhere are wiped out by disease and/or predation.
⢠a an isolated community surviving in a canyon with continuous water supply in a region of long-term extended drought.
⢠species living in an isolated bay far away from the annihilation caused by a massive asteroid impact elsewhere on the planet.
Many questions remain about why some species survive a mass extinction event. What was it about turtles, snakes, crocodilians, birds, and mammals that allowed them to survive the K/T extinction event when all the dinosaurs and so many other organisms did not?
Refugia In Our Modern Era
With the advance of human civilizations, we are witnessing unprecedented extinctions as cities and croplands replace forests and coastal plains. Some species are hunted to extinction, while environmentally sensitive species lose their refugia. Human activities, such as building interstate highways and expanding urban corridors, are isolating populations that would otherwise be part of a continuous breeding population across an area or region. For some species, the surviving members now exist only in zoos or on isolated parklands and wildlife preserves. On the other hand, useful species, such as dogs, cats, goats, cows, and chickens, are protected, but are increasingly being genetically modified to suit the needs and interests of their human hosts.
2.35
Evolution and Adaptation To Extremes
Adaptation is the driving force of evolution on many levels (from microscopic organisms to massive ones, and from individual species to diverse communities). Environmental changes over time force species and communities (ecosystems) to adapt to special niches. Figure 2-79 shows the evolution and diversification of plants through geologic time. Some species are able to spread across large regions by adapting to variable climate conditions that match their reproductive and feeding cycles. Ancient lineages that have survived extinction are often better adapted to living in harsh environments (such as lichens, mosses, and club mosses living in barren, rocky settings; Figure 2-80). Species like the Giant Sequoias, which live in isolated communities in California's Sierra Nevada Range, are remnant populations of what was once a much more widespread forest community during the last ice age (Figure 2-81).
Organisms that have adapted to living in vernal pools illustrate adaptation to extreme environmental conditions. A vernal pool is a small pool or pond that forms temporarily, such as after a summer thunderstorm or seasonal precipitation (Figure 2-82). During the short period when water is present, a variety of species have adapted to completing their entire life cycle in a matter of days to weeks before the water dries up or becomes too salty. Amazingly, species like tadpole shrimp, fairy shrimp, and other desert species have adapted to these extreme environmental conditions. Tadpole shrimp have a fossil ancestry dating back to marine environments in middle Paleozoic time. Tadpole shrimp have survived longer than almost any known species by being able to adapt to a variety of extreme environmental conditions (Figure 2-83).
Fig. 2-79. Evolution involving competition and adaptation has led to a diversification of plants and molds through geologic time.
Fig. 2-80. Ancient lineages of early plants (such as lichens, mosses, and club mosses) have adapted to harsh environments in rocky settings.
Fig. 2-81. Giant Sequoias (the world's largest trees) in Yosemite National Park, CA are adapted to local climate conditions.
Fig. 2-82. Vernal pools like this one form after a desert summer thunderstorm. Within days, species such as tadpole shrimp hatch, feed on the limited food supply, grow to adult size, and reproduce (producing cysts and eggs, both sexually and asexually) before dying off when the water dries up, sometimes with many years passing between periods of precipitation.
Fig. 2-83. Tadpole shrimp are branchiopod crustaceans that appeared in the marine fossil record about 400 million years ago, but are found today only in vernal pool habitats. Their body plan has remained more or less consistent over the course of the past 250 million years. These species have adapted to survive some of the harshest climate extremes on Earth.
Modern Coral Reefs: Massive coral reefs are found along the margins of continents and islands in tropical marine settings around the world (Figure 2-84). The Great Barrier Reef along the eastern coastline of Australia is perhaps the largest accumulation of biogenic material on the globe, formed from the accumulation of debris from rapidly growing coral communities. These reefs have grown, filling in coastal areas, as sea level has risen nearly 400 feet to its present level since the peak of the last ice age, about 18,000 years ago. The big question is how they will continue to grow and adapt with climate change.
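The figures in this paragraph (roughly 400 feet of rise over about 18,000 years) can be turned into an average rate. This is rough arithmetic only; the real post-glacial rise was episodic, with fast meltwater pulses separated by slower intervals:

```python
rise_feet = 400.0            # stated post-glacial sea-level rise
elapsed_years = 18_000.0     # stated time since the last glacial maximum
rise_mm = rise_feet * 304.8  # 1 foot = 304.8 mm
avg_rate_mm_per_year = rise_mm / elapsed_years
print(f"average rise: {avg_rate_mm_per_year:.1f} mm per year")  # 6.8 mm per year
```

Healthy coral growth can keep pace with rates of this order, which is part of why the reefs were able to fill in drowned coastal areas as the text describes.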
The Anthropocene Epoch (1865 AD to present)?
The name Holocene Epoch has been applied to the time period extending from the end of the last ice age, encompassing the rise of human civilizations up to the present time. However, the name Anthropocene has been suggested to designate the current geological age, viewed as the period during which human activity has become the dominant influence on climate and the physical environment. Some questions are: When did this happen? And how will generations of consciously aware descendants of our times (human and otherwise) be able to recognize it from landforms and layers within sedimentary deposits? Many suggestions have been made, and deposits in one region may not completely match characteristics in another region. (This is an excellent discussion topic for examining other extinction boundaries in the geologic past!) Here are points to consider: when did the Anthropocene begin?
⢠Many scientists think the beginning of the Anthropocene began with the Industrial Revolution in the 1850s; the logical start starting point to the modern era. The start of the Industrial Revolution marks when major extraction of mineral resources began (coal, iron, and other metals), the spread transportation networks, the growth and expansion urban development (Figure 2-85).
⢠Durable pollen from eucalyptus trees imported from Australia and New Zealand to support expansion or the railroads start to appear in sediments throughout California sedimentary basin deposits starting in the 1850s.
⢠Mass production and distribution of durable glass, porcelain products, and lead bullets started in the 1850s, beginning the contribution to throw-away society materials that can be found in abundance wherever humans went. Durable man-made products began to accumulate as trash in the environment.
A later start to the Anthropocene Epoch has also been suggested: post-World War II. Sediments from this period include:
⢠A universal boundary world-wide where radioactive isotopes and byproducts of the surface testing of nuclear weapons can now be identified as a boundary in sedimentary deposit around the world.
⢠Durable plastics, construction materials, porcelain tiles, composite materials, and other durable trash of the modern era released intentionally or accidentally (such as damaging effect caused by superstorm damage, tsunamis, floods, or other disasters) are now distributed throughout the environment.
⢠Construction of sprawling urban area, mining regions, transportation routes (such as interstate highways) , and agricultural activities have significantly modified the landscape in many regions that will have lasting effect on the landscape for many millennium into the future. Some estimates suggest that human activities are moving more materials than all the rivers, wind, ocean currents, and other natural geologic processes combined.
⢠Landfills will be a long-lasting time stamp on the landscape worldwide.
⢠Introduction of exotic species have completely changed the environment in many regions.
This discussion has many intriguing manifestations. Can humans organize and adjust to what might be considered sustainability? Or, perhaps without hope, are we destined to an apocalyptic fate as described by Thomas Malthus (1766-1834), an English economist and demographer who proposed a theory that human population growth will always tend to outrun the food supply? Malthus suggested that the betterment of humankind is impossible without strict enforcement of limits on reproduction. So far in our modern era, it seems that some of the limitations on what might be considered sustainable have been addressed by advancing technology and changing social norms (globally). The question is, can we collectively achieve sustainability without enduring war, disease, and famine?
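Malthus's argument is essentially that exponential ("geometrical") population growth must eventually overtake linear ("arithmetical") growth in food supply. The toy model below illustrates the idea with invented numbers (a population of 100 growing 2% per year against a food supply of 200 units rising by 5 units per year); none of the values come from Malthus or from this chapter:

```python
def malthus_crossover(pop0, growth_rate, food0, food_increment, max_years=1000):
    """Return the first year in which exponential population growth exceeds
    a linearly growing food supply, or None if it never does within max_years."""
    population, food = pop0, food0
    for year in range(1, max_years + 1):
        population *= 1.0 + growth_rate  # geometric (exponential) growth
        food += food_increment           # arithmetic (linear) growth
        if population > food:
            return year
    return None

print(malthus_crossover(100, 0.02, 200, 5))  # 98
```

With these invented inputs the crossover arrives in year 98; raising the food increment delays the crossover but, as long as population growth stays exponential, cannot prevent it indefinitely, which is the core of Malthus's claim.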
Fig. 2-85. The Washington Monument is possibly a good choice for a type section for the Holocene/Anthropocene boundary. The lower part of the monument was built (by slaves) before the Industrial Revolution began. The upper part of the monument was completed in a second construction phase after the Civil War (by free men), when the Industrial Revolution was well in progress.
Interestingly, the H/A boundary level depicted on the Washington Monument approximately marks the level to which sea level would rise if most of the ice on Greenland and Antarctica were to melt due to global warming (as has already occurred in the geologic past).
Fig. 2-86. A famous cartoon depicting human evolution. Many people agree that humans are greatly altering our global environment, with potentially catastrophic consequences unless we drastically change how we use our planet's limited resources. We need to learn how to manage and sustain the world's natural resources and our populations while avoiding catastrophe.
Concepts of evolution, refugia, and succession provide a valuable lesson about modern society.
In our lifetimes we can witness the progress of evolution in many ways and, hopefully, learn from it. The advance of technology illustrates these concepts. Classic examples include:
• cars displacing or replacing trains and horse-drawn carts as the primary means of transportation.
• cell phones replacing telephones, which replaced telegraphs and mail services as the primary means of communication.
• cable television replacing radio/TV broadcasting.
• cities growing through succession, following changes in politics, industry, and the development of infrastructure.
So, should calamity happen, and an area or region lose electrical power or access to liquid fuels, what would survive? Populations would need to migrate, adapt, or face famine. Electric- and gas-powered tools and equipment would be rendered useless, but hand-powered tools like hammers, water pumps, shovels, saws, and axes would be increasingly valuable!
In the business world, evolution provides particularly important concepts. It is an interesting study to see how businesses and corporations survive economic calamities caused by wars and depressions, and the rise of competing new technologies. It is a jungle out there.
Where are rocks of different geologic ages exposed in the United States?
Rocks of all geologic ages are exposed in different parts of the United States. Figure 2-87 is a geologic map of the conterminous United States, and Figure 2-88 is the geologic map legend showing the colors associated with regions where rocks of different ages are exposed at the surface. Earth scientists use geologic maps like these to locate areas where they may go to study the fossil record where rocks of different ages (and the fossils they contain) occur. Each region of the country has a unique fossil record. The best place to start an investigation is to visit museums, universities, and government organizations that host fossil and rock collections in the vicinity where rocks are exposed. Learn more about the regional geology and natural resources of the United States on this link: Regional Geology of the United States.
Who Really Destroyed Solomon's Temple in Jerusalem? (Haaretz, https://www.haaretz.com/israel-news/2021-06-13/ty-article/who-really-destroyed-solomons-temple-in-jerusalem/0000017f-f2dc-dc28-a17f-feff98f00000)
Or not. This, says renowned biblical scholar Richard Elliott Friedman, a professor of Jewish Studies at the University of Georgia and author of the best-selling book “Who Wrote the Bible?” may have been a case of mistaken identity. The Babylonians may have destroyed Judah and kicked out its populace, but they did not destroy the temple. The culprits were the Edomites, a small kingdom in the southern Transjordan, he posits.
In a short article published in Academia, "The Destruction of the First Jerusalem Temple," Friedman suggests that the fall of Jerusalem and the destruction of the temple were two separate events, which a biblical scribe collapsed into one and thus led us all to misplace the blame.
At first glance this seems unlikely. The Hebrew Bible explicitly states no less than three times that the Babylonians burned down the Temple when they took the city:
“Nebuzaradan, captain of the guard, a servant of the king of Babylon, [came] unto Jerusalem, and he burnt the house of the Lord, and the king’s house, and all the houses of Jerusalem, and every great man’s house burnt he with fire” (2 Kings 25:8-9 KJV; and very similarly stated in Jeremiah 52:12-13 and 2 Chronicles 36:19).
But, Friedman argues, these accounts are likely erroneous. The Book of Jeremiah relates that a few months after the Babylonians took Jerusalem, Ishmael son of Nethaniah, the same man who killed the Babylonian-appointed governor of Judah Gedaliah, killed 80 men from Nablus and Shiloh “having their beards shaven, and their clothes rent, and having cut themselves, with offerings and incense in their hand, to bring them to the house of the Lord” (Jeremiah 41:5; KJV).
Layers of Jerusalem. Credit: Ariel David
How could the Babylonians have burnt down the Temple, if it was still standing and receiving offerings?
Friedman points out that while it is true that the above-mentioned passages state that the Babylonians destroyed the Temple upon capturing the city, a fourth account of these events does not say the Temple was destroyed when describing the capture of Jerusalem. In this fourth report, which was incorporated into or used as the base for the three others, the Babylonians “burned the king’s house, and the houses of the people, with fire, and brake down the walls of Jerusalem” (Jeremiah 39:8; KJV). Not a word about the Temple being destroyed.
Of course, that the Temple was destroyed is an irrefutable historical fact. It is just that Friedman believes it took place a little later, in a separate event, and that when the historian who wrote the account that underlies the accounts in 2 Kings 25, Jeremiah 52, and 2 Chronicles 36 described the traumatic events of those years, he simply conflated the fall of Jerusalem and the burning of the Temple into one event.
Friedman argues that while the Babylonians did destroy much of Jerusalem when they occupied the city, the Temple remained intact and thus could still be a pilgrimage destination for the unfortunate victims of Ishmael son of Nethaniah. But shortly afterward (exactly when and under what circumstances he does not know), the Edomites came to Jerusalem and destroyed the Temple.
The children of Edom
Friedman’s evidence for this Edomite attack on the Temple is based on three passages:
* The ire of the Judean exiles towards the “the children of Edom” expressed in the famous “Rivers of Babylon” psalm, for calling out “Rase it, rase it, even to the foundation thereof” on the “day of Jerusalem” (Psalm 137:7; KJV);
* The prophet Obadiah’s tirade against the Edomites, in which he promises their complete annihilation by God for their “violence against thy brother Jacob” (Obadiah 1:10);
* And most explicitly, the words of Zerubbabel, the leader of the Judean exiles, to King Darius of Persia: “You also vowed to build the temple, which the Edomites burned when Judea was laid waste by the Chaldeans [the Babylonians]” (1 Esdras 4:45; RSV).
Digging up early Jerusalem (Credit: Ariel David)
This is a novel and intriguing reconstruction of the events, but is it true?
Probably not. Friedman acknowledges that each of the three textual “problems” on which he bases his argument – the lack of mention of the Temple in Jeremiah 39, the pilgrimage to the supposedly already destroyed Temple in Jeremiah 41, and the mysterious anger at the Edomites in Obadiah, Psalm 137, and 1 Esdras 4 – has other solutions. However, he argues that his solution is superior because it solves all three together, rather than requiring a different solution for each problem: “Three enigmas with a host of proposed solutions, or a single explanation for all three. We should favor the most parsimonious solution.”
Friedman’s solution may be parsimonious, but is it likely?
Perhaps we can believe that the Babylonians destroyed the palace and the houses of the people but left the Temple intact. But are we to believe that the author of Jeremiah 39 expressed this by simply mentioning what buildings they did destroy, without explicitly stating that they left the Temple standing? That seems like something he would have mentioned.
It is more likely that the text did originally mention the destruction of the Temple and that the text was simply corrupted in one of the many times it was copied. The most likely solution is that this is a case of haplography, a very common scribal error in which a copyist’s eye skips from one word to an identical word later in the text and thus inadvertently erases the words in between.
In this case, the repeated word might be “the house”: “burned the house of [the Lord, and the house of] the king, and the houses of the people, with fire, and brake down the walls of Jerusalem.” In the original Hebrew this error would only have caused seven letters to be lost.
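The eye-skip mechanism can be illustrated with a short sketch (the helper function and the English rendering are illustrative only; in the Hebrew, the dropped span amounts to the seven letters mentioned above):

```python
def eye_skip(text: str, word: str) -> str:
    """Simulate the copyist's error: the eye jumps from the first
    occurrence of `word` to the next identical occurrence, and the
    words in between are silently dropped from the copy."""
    first = text.index(word)
    second = text.index(word, first + len(word))
    return text[:first] + text[second:]

# The reconstructed reading of Jeremiah 39:8, with "the house of" repeated:
original = "burned the house of the Lord, and the house of the king"
copied = eye_skip(original, "the house of")
print(copied)  # burned the house of the king
```

The same mechanism, applied once to the repeated phrase, yields exactly the shorter reading found in the extant text.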
Babylon, in Iraq, March 2021 (Credit: Hadi Mizban, AP)
Another explanation is that what appears in the extant text as “the houses of the people” was originally “the house of the people” – that is what the ancient translator of the verse into Greek saw before him – and that “house of the people” was an otherwise unknown name of the Temple.
And say that indeed the Babylonians left the Temple standing, that the author of Jeremiah 39 did not mention this fact explicitly, and that the Temple was indeed destroyed later: are we to believe that the author of 2 Kings 25 would have erroneously attributed the Temple’s destruction to the Babylonian Nebuzaradan? He must have lived only a short while after the events, considering that the last event he mentions in his history is the release of King Jeconiah from captivity (2 Kings 25:27-30) and not, say, the murder of Nebuchadnezzar II’s son and heir in 560 B.C.E. or the fall of Babylonia altogether in 539 B.C.E.
And say the author of this narrative in 2 Kings did for some reason absolve the Edomites of their responsibility for the destruction of the Temple, how is it that no mention of this is recorded in the Hebrew Bible and we only learn of this in the very late and historically dubious 1 Esdras? If the author of Psalm 137 was angry at the Edomites for destroying the Temple, why would he not mention this crime, and instead just mention that they clamored for its destruction?
And if Obadiah was excoriating the Edomites for destroying the Temple, why did he not mention that they did this, instead accusing them of taking the side of the “strangers” and “foreigners” who “carried away captive his forces...entered into his gates, and cast lots upon Jerusalem… as one of them” (1:11), of rejoicing “over the children of Judah in the day of their destruction” and (1:12), of entering “into the gate of my people in the day of their calamity,” of looking “on their affliction in the day of their calamity,” of laying “hands on their substance in the day of their calamity” (1:13), and of standing “in the crossway to cut off those of his that did escape” (1:14)?
If the Edomites did indeed destroy the Temple, these allegations seem quite petty. What Obadiah and Psalm 137 accuse the Edomites of is not attacking Jerusalem and destroying its Temple; rather, they condemn the Edomites for taking part in the destruction of Jerusalem as auxiliaries to the Babylonian army, for helping the Babylonian “strangers” instead of standing on the side of their “brothers.”
Reconstruction of Babylon's Ishtar Gate, at the Museum of the Ancient East in Berlin (Credit: Markus Schreiber, AP)
The House of the Lord
That the Edomites were vassals of the Babylonians and were required to provide soldiers to assist in the campaign against Judah is not only possible but plausible. And that the Judeans would have resented this betrayal greatly is certain.
In the end, what Friedman’s theory stands on is the story in Jeremiah 41 about the murder of the pilgrims on their way to “the house of the Lord.” There is nothing in this story to support its historicity, and as it stands it seems it was intended only to further blacken the reputation of Gedaliah’s murderer. Did Ishmael son of Nethaniah really kill 80 people for no apparent reason? Maybe. Were they actually on their way to the Temple? Who knows?
But even if we do think this story does prove that people went to present offerings at “the House of the Lord” after Jerusalem was taken by the Babylonians, there are very good explanations for this. Perhaps, after the destruction, people continued to present sacrifices at the site of the destroyed Temple? Or perhaps the “house of the Lord” in question wasn’t the temple in Jerusalem at all but rather a different temple, say the temple recently uncovered by archaeologists in Motza, just 10 kilometers (6.2 miles) from Jerusalem.
Either way, this story is not enough for us to simply overturn the clear and explicit report of 2 Kings 25 that the Babylonians did in fact destroy the Temple.
Asked what he thought of these difficulties, Friedman graciously responded at some length. In brief, he says that the argument from silence drawn upon here – the lack of any mention of an Edomite destruction of the Temple in Psalm 137 and Obadiah – is less convincing than the one he drew on – the fact that Jeremiah 41 does not mention the destruction of the Temple – since the former is poetic speech and the latter is prose.
Poets and prophets, he explained, use “image and allusion” and don’t spell out the details of what they are writing about in the same way that writers of prose do. As for the unreliability of 1 Esdras, he does not think its lateness is a problem. The author of this book, he says, may have used ancient and historically accurate sources, which have not come down to us. He also rejects the possibility that the pilgrims in the Gedaliah story would have offered sacrifices on the site of the destroyed Temple, since this would be “a direct violation of the law in Deuteronomy and the dedication speech of Solomon in 1 Kings 8.”

As has been well-known for millennia, in either 587 or 586 B.C.E., the forces of Nebuchadnezzar II, king of Babylonia, dealt a deadly blow to the small and rebellious Kingdom of Judah. They wiped it off the map, deported large swathes of its population, and destroyed its holy temple, the Temple of Solomon.
Or not. This, says renowned biblical scholar Richard Elliott Friedman, a professor of Jewish Studies at the University of Georgia and author of the best-selling book “Who Wrote the Bible?” may have been a case of mistaken identity. The Babylonians may have destroyed Judah and kicked out its populace, but they did not destroy the temple. The culprits were the Edomites, a small kingdom in the southern Transjordan, he posits.
In a short article published in Academia, "The Destruction of the First Jerusalem Temple," Friedman suggests that the fall of Jerusalem and the destruction of the temple were two separate events, which a biblical scribe collapsed into one and thus led us all to misplace the blame.
category: Archaeology
search_query: Was the Temple of Solomon real?
search_type: yes_statement
search_engine_input: the "temple" of solomon was "real".. the existence of the "temple" of solomon is a historical fact.
url: https://www.simonandschuster.com/books/The-Egyptian-Origins-of-King-David-and-the-Temple-of-Solomon/Ahmed-Osman/9781591433019
title: The Egyptian Origins of King David and the Temple of Solomon ...
About The Book
An investigation into the real historical figure of King David and the real location of the Temple of Solomon
• Identifies King David as Pharaoh Tuthmosis III of the 18th Dynasty and David’s son Solomon as Pharaoh Amenhotep, Tuthmosis’s successor
• Shows how the Temple of Solomon described in the Bible corresponds with the Mortuary Temple of Luxor in Egypt
• Explains how David was not a descendant of Isaac but his father and how biblical narrators changed the original story of Abraham and Isaac to hide his Egyptian identity
During the last two centuries, thousands of ancient documents from different sites in the Middle East have been uncovered. However, no archaeological discovery speaks of King David or Solomon, his son and successor, directly or indirectly. Was King David a real person or a legend like King Arthur? Proposing that David was a genuine historical figure, Ahmed Osman explores how his identity may be radically different from what is described in religious texts.
Drawing on recent archaeological, historical, and biblical evidence from Egypt, Osman shows that David lived in Thebes, Egypt, rather than Jerusalem; that he lived five centuries earlier than previously thought, during the 15th rather than the 10th century B.C.; and that David was not a descendant of Isaac but was, in fact, Isaac’s father. The author also reveals David’s true Egyptian identity: Pharaoh Tuthmosis III of the 18th Dynasty.
Confirming evidence from rabbinic literature that indicates Isaac was not Abraham’s son, despite the version provided in Genesis, Osman demonstrates how biblical narrators replaced David with Abraham the Hebrew to hide the Egyptian identity of Isaac’s father. He shows how Egyptian historical and archaeological sources depict figures that match David’s and Solomon’s known characteristics in many ways, including accounts of a great empire between the Euphrates and the Nile that corresponds with David’s empire as described in the Bible. Extending his research further, the author shows that King Solomon, King David’s son, corresponds in reality to Pharaoh Amenhotep, successor of Tuthmosis III, the pharaoh who stands out in the dynastic history of Egypt not only for his peaceful reign but also as the builder of the Temple of Luxor and the famed Mortuary Temple at Luxor, which matches the biblical descriptions of Solomon’s Temple.
Unveiling the real history behind the biblical story of King David, Osman reveals that the great ancestor of the Israelites was, in fact, Egyptian.
Excerpt
Chapter 6
Sarah and Pharaoh
How did the account of Tuthmosis III and his Egyptian empire become part of the biblical sources used by the scribes for the story of David?
When we think of the Israelites’ connection with Egypt, we always talk about Joseph the Patriarch, of the coat of many colors. It was he who brought Jacob, called Israel, and his Hebrew tribe from Canaan to Egypt. Nevertheless, the Bible itself gives us an account of an earlier Hebrew contact with Egypt’s Pharaonic family, by Abraham and his wife Sarah. Abraham the Hebrew, who made his first appearance in history in the 15th century BC, has been regarded by Jews, Christians, and Muslims alike as the founding father of the 12 tribes of Israel.
In this chapter, however, I am going to show that Abraham’s patriarchy is by no means actual: rather it is of symbolic importance.
Abram and his wife Sarai (to give them their original names found in the Bible) began their journey into history, according to the biblical account, at Ur. The party, led by Terah, Abraham’s father, also included Lot, Terah’s grandson and Abram’s nephew. The Book of Genesis gives no explanation of the reasons which prompted Terah and his family to set out on the great trade route to Canaan. Nor, as is usual in the Bible, is there any indication of the date when this migratory journey began.
For anyone trying to make a living from the soil, the hills of Canaan posed an intimidating challenge. Times of famine were common--and it was at a time of famine that Abram and Sarai are said to have set out on their travels again from Haran, making their way south, a journey that was to forge the first links between this Hebrew Semitic tribe and the royal house of Egypt, and ensured for Abram’s family an enduring place in world history.
Compared with Canaan, Egypt was a rich and sophisticated country. Although Abram and Sarai are said by the Bible to have set out for Egypt at a time of famine, it may have been some other motive--trade, perhaps--that caused them to make the journey. Certainly, they did not stay in the Eastern Delta of the Nile--which one might have expected had they simply been seeking food--but made their way to wherever the Pharaoh of the time was holding court.
Wherever Abram and Sarai went, and for whatever purpose, we are simply told that Sarai was “a fair woman to look upon” and, as they approached Egypt, Abram, fearing that he might be killed if it were known that Sarai was his wife and Pharaoh took a fancy to her, said: “Say you are my sister, so that I will be treated well for your sake and my life will be spared because of you” (12:13). This, according the Book of Genesis, proved a wise precaution. Courtiers advised Pharaoh of the beautiful woman who appeared in their midst, and “she was taken into his palace. He treated Abram well for her sake, and Abram acquired sheep and cattle, male and female donkeys . . . But the Lord inflicted serious diseases on Pharaoh and his household because of Abram’s wife Sarai. So Pharaoh summoned Abram. ‘What have you done to me?’ he said. ‘Why didn’t you tell me she was your wife? Why did you say, She is my sister, so that I took her to be my wife? Now then, have your wife. Take her and go!’ Then Pharaoh gave orders about Abram to his men, and they sent him on his way, with his wife and everything he had.” (Genesis 12: 15-20).
Abram and Sarai were sent back to Canaan with generous gifts. Pharaoh also provided Sarai with an Egyptian maid, Hagar, and, after they had returned safely to Canaan, Sarai gave birth to a son, Isaac. The essence of the biblical account of the journey to Egypt is that Sarai, the wife of Abram, also became the wife of the ruling Pharaoh. This, according to the Book of Genesis, would not only have involved the paying of the bride-price to Abram for the hand of his “sister,” but sexual intercourse on the same day as the actual marriage ceremony. The question therefore arises: Who was the real father of Isaac, Abram or Pharaoh?
The available evidence--the marriage; Abram’s pose as Sarai’s brother; Sarai being seen by princes of Pharaoh who commended her beauty; her being taken into the royal palace; the king’s marriage to her and his generous treatment of Abram (presents of sheep, oxen, etc.); the gift to Sarai of the maid Hagar; the elaborate efforts of the biblical narrator to put as many years as possible between the couple’s return to Canaan and Isaac’s birth; textual reference in the Talmud (the most important work of religious law in post-biblical Judaism), regarded as next in authority to the Old Testament in its account of the early history of the Israelites, and in the Qur'an, sacred book of Islam; the history of Isaac’s immediate descendants--points to the Pharaoh, not Abram, as Isaac’s father.
The efforts of the biblical narrator to disguise the truth about Isaac’s parenthood have, I believe, historical roots that go beyond the fact that he was the son of a second, “sinful” marriage. In the course of the years that followed, the Israelites were to return to their ancestor’s land in Egypt, where they remained for four generations until the Exodus when, burdened by harsh treatment and persecution by their Egyptian taskmasters, they were led out of the country by Moses on the first stage of their journey to the Promised Land, back in Canaan. Many more centuries passed before an account of these events was put down in writing, by which time Egypt and its Pharaoh had become a symbol of hatred for the Israelites. The biblical narrator was therefore at pains to conceal any family connections between Israel and Egypt.
“No one since Sigmund Freud has done more to show the connection between ancient Egypt’s Amarna period and the biblical stories of Joseph, Moses, and the Exodus. Ahmed Osman now provides compelling new evidence showing the true roots behind the establishment of the kingdom of Israel and the building of the Temple of Solomon.”
– Andrew Collins, author of The Cygnus Key and Göbekli Tepe:Genesis of the Gods
“Ahmed Osman has discovered an intriguing back door into biblical history. Walking the tightrope between skeptical archaeologists and true believers of the Bible, the author asks a compelling question: Did Hebrew scribes attribute the military victories of an Egyptian pharaoh to David, the famous slayer of Goliath?” | Table of Contents
category: Archaeology
search_query: Was the Temple of Solomon real?
search_type: yes_statement
search_engine_input: the "temple" of solomon was "real".. the existence of the "temple" of solomon is a historical fact.
url: https://www.crosswalk.com/faith/bible-study/interesting-facts-about-solomons-temple.html
title: Where Was Solomon's Temple in the Bible - 7 Interesting Facts
In 1883, a biblical scholar, Thomas Newberry, designed a three-dimensional model of Solomon’s Temple as part of the Anglo-Jewish Historical Exhibition, heightening the interest of Jews and Christians alike in the first Jewish temple. The story of Solomon’s commitment to build a permanent house for God engages believers’ imagination. It was stately and beautiful from a human standpoint but also conceived in the heart of another king who dearly loved God. The desire to understand more about the background of its construction and the modern-day controversy surrounding its existence sends believers back to the Scriptures to discover where, how, and why it was constructed, and why God chose Solomon to build it.
Here are some interesting facts Christians should know about Solomon’s Temple:
Who Built Solomon's Temple?
1. Solomon Was Chosen by God to Build the Temple
King David, Solomon’s father, lived in the royal palace, but he was concerned that God’s priests still had to serve Him in the 400-year-old, portable Tabernacle from the wilderness wanderings. David wanted to build a permanent house for God, and a resting place for the Ark of the Covenant (1 Chronicles 28:2). The prophet Nathan initially gave David approval to begin construction, but God spoke to Nathan in a dream. God said David would not be the one to build His house, even though David had a heart after God’s own heart. David would, however, draw up the plans and accumulate materials for the building (1 Chronicles 22:2-4; 22:14-17; 29:2-9).
David was a great warrior king who united the Israelite tribes, captured Jerusalem, and chose Mount Moriah as the site for a future temple. But God said, “You will not build a house for my name, for you are a man of battles and have shed blood” (1 Chronicles 28:3). The honor and responsibility of building the temple would go to his son, Solomon—whose name means “peace.”
2. The Temple Was Solomon’s Crowning Achievement
The temple was not only designed to be a place of sacrifice, but it would also motivate Israel to turn away from the idols of surrounding nations and evil practices of the Canaanites. King Solomon had the wherewithal to build. He inherited his father’s kingdom and extraordinary wealth, but he also accumulated great personal wealth. Known as an ambitious builder of public works, Solomon’s crowning achievement was the building of the Temple. Its location, Mount Moriah, was where God appeared to David, and also where Abraham offered Isaac as a sacrifice (2 Chron. 3, Gen 22).
Construction on the temple began after David’s death. If Solomon reigned from 970-930 BC, the temple construction began in 966 BC. Accounts of the building are given in 1 Kings 5-8 and 2 Chronicles 2-4. A summary in 1 Kings 6 describes the dimensions, windows “high up in the temple walls,” side rooms, quarried stones, stairway to connect three levels, wood panels and flooring, elaborate carvings, gold overlay in the whole interior and on the altar in the inner sanctuary, cherubim made from olive wood and overlaid with gold, and olive wood and juniper doors. The temple was constructed in all its parts and according to all its specifications over a span of seven years.
How Much Money Was Spent on Solomon's Temple?
3. Solomon Spared No Expense
King Hiram of Tyre—King David’s friend—supplied the wood and high-grade stones (1 Kings 5:1-18). The wood was shipped to Joppa by sea on rafts, and subsequently transferred by land to Jerusalem (2 Chron. 2). A remarkable fact concerning Solomon’s temple was the quietness of its construction: quarried stones were hewn beforehand and transported to the building site, so no hammers, axes, or iron tools were heard in the temple while it was being built (1 Kings 6:7).
Using standard cubits, the foundation of the temple was 60 cubits long, 20 cubits wide, and 30 cubits high. The porch at the front was 20 cubits long across the width of the building and 20 cubits high, and it projected 10 cubits from the front of the temple (2 Chron. 3, 1 Kings 6). In feet, this has been translated as follows: “The Temple is 2,700 square feet. … a porch or vestibule, 15 feet deep and 30 feet wide; The Holy Place, 60 feet long and 30 feet wide; and the Holy of Holies—which was a perfect cube—30 feet long, 30 feet wide, and 30 feet high. The interior height of the rest of the building was 45 feet.”
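As a quick arithmetic check, a short sketch (assuming the common 18-inch cubit, which is not stated in the sources but is implied by the quoted feet figures) shows that the cubit and feet dimensions agree:

```python
FT_PER_CUBIT = 1.5  # assumed 18-inch "standard cubit"

# Cubit dimensions from 1 Kings 6 / 2 Chronicles 3, converted to feet
feet = {name: cubits * FT_PER_CUBIT
        for name, cubits in {"length": 60, "width": 20,
                             "height": 30, "porch_depth": 10}.items()}

assert feet["length"] == 90 and feet["width"] == 30
assert feet["height"] == 45                    # "interior height ... was 45 feet"
assert feet["porch_depth"] == 15               # porch "15 feet deep"
assert feet["length"] * feet["width"] == 2700  # "2,700 square feet"

# The Holy Place (60 ft) plus the Holy of Holies (30 ft) fill the
# 90 ft interior; the 15 ft porch projects beyond it.
assert 60 + 30 == feet["length"]
```

Every quoted feet figure follows from the cubit dimensions at that conversion rate.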
Solomon spared no expense in the construction. The temple was adorned with precious stones, and with gold from Parvaim in Arabia (2 Chron. 3). According to Biblecharts.org, the cost of building the temple today has been estimated at three to six billion dollars. The debt was so huge that Solomon had to pay off King Hiram by giving him twenty towns in Galilee (1 Kings 9:11). Solomon conscripted thousands of sojourner laborers from all of Israel, plus 3,300 foremen to manage the construction (1 Kings 5:13-18).
After the Temple was built, the Tabernacle was dismantled. According to Lambert Dolphin at TempleMount.org, some rabbis and authorities in Jerusalem believe it was originally stored in a room under the Temple Mount.
4. The Beautiful Temple Was Dedicated and Celebrated
At its completion, Solomon held a huge inauguration celebration to dedicate the building in 953 BC. The ceremonies included a sermon and a dedication offering of 22,000 oxen and 120,000 sheep. Then Solomon proclaimed a great, 14-day feast (2 Chron. 6, 1 Kings 8).
His prayer of dedication was a combination of praise, an exaltation of God, and encouragement of the people. Solomon praised God’s faithfulness to His covenant people. He encouraged the people about the future, telling them God would hear them from the temple. It would be a place of refuge for them, and even a place for people from other countries to pray.
Immediately after Solomon’s prayer, fire from heaven ignited the offerings on the altars, and the glory of God filled the temple, forcing the priests to stay outside the temple and causing the people to fall on their faces in worship and thanksgiving (2 Chronicles 7:1-6). Solomon made it clear that the temple was not dedicated to “contain” God (1 Kings 8:27). In the New Testament, the martyr Stephen confirmed this. “Solomon built God a house,” Stephen said. “However the Most High does not dwell in temples made with hands…” (Acts 7:47-48).
The people in Solomon’s temple offered praise to God alone. There were no idols in the temple, which would represent man trying to appease God. Instead, the temple included the Ark of the Covenant with the Mercy Seat, which reminded Israel of their need for salvation in God.
What Happened to Solomon's Temple?
5. Solomon’s Temple Was Destroyed and Rebuilt
Eventually, Solomon’s heart wandered from God. When he died, the nation—which was already in decline—split into two parts with two substitute places of worship in Bethel and Dan; and idolatry again became a part of Israel’s spiritual culture (1 Kings 12:25-31).
The temple declined in wealth and importance for 367 years. Jeremiah 25 warned that Jerusalem would be destroyed, and the people would be taken into captivity. The temple on Mount Moriah—now known as the Temple Mount—was looted and destroyed by the Babylonians under King Nebuchadnezzar II in about 587 BC. A second temple was later erected on the same site—described in the book of Ezra. Then, during the first century AD, Herod—the appointed head of Judea—enlarged and expanded that second temple and surrounding areas.
This reconstructed temple included a restored porch—described as Solomon’s Colonnade in Acts 5:12. The Jewish historian Josephus described it in Jewish Antiquities: “There was a porch without the temple, overlooking a deep valley, supported by walls of four hundred cubits, made of four square stone, very white; the length of each stone was twenty cubits, and the breadth six; the work of King Solomon, who first founded the whole temple.”
Herod’s Temple would be destroyed, Jesus told His disciples (Luke 21:5-6); and indeed, the Romans under Emperor Vespasian destroyed it during the siege of Jerusalem in AD 70. Only a small portion of the retaining wall remains—the so-called “Wailing Wall.” Jesus said the temple site would continue to be “trampled by the Gentiles… until the times of the Gentiles are fulfilled” (Luke 21:24).
Today, traditional and observant Jews pray three times a day for the Temple’s restoration. The Bible says a new temple will be built in Jerusalem by the Jews prior to Jesus’ Second Coming—a temple that will be desecrated by the anti-Christ.
6. There Has Been Interference with the Temple’s Archaeological History
One problem with finding evidence for Solomon’s Temple today is Muslim interference with Jewish archaeological digs on the Temple Mount. For example, in the mid-1990s, the Muslim Waqf used heavy equipment to bulldoze ancient structures and take rich archaeological materials to a dump where they were mixed with modern trash. Later, Israeli archaeologists were allowed to sift through the dumped material, and they found “a wealth of artifacts”—but not necessarily from Solomon’s Temple.
According to an article in Smithsonian Magazine, in 1929, Muslim historian Aref al Aref declared that the Mount’s “identity with the site of Solomon’s temple is beyond dispute.” But in recent decades—with increasing fighting over the sovereignty of areas of Jerusalem, including the Temple Mount—Palestinians are back-peddling. In 2000, Palestinian leader Yasir Arafat suggested to President Bill Clinton that the Temple Mount might have been in the West Bank town of Nablus—ancient Shechem and one of the largest Palestinian cities today—instead of Jerusalem.
7. Some of the Archaeological Findings Are Controversial
Some modern scholars doubt the existence of Solomon’s temple because, they say, it is not mentioned in extrabiblical accounts. But there are extrabiblical accounts. For instance, Josephus wrote in Jewish Antiquities: “… the temple was burnt four hundred and seventy years, six months, and ten days after it was built.” Jewish scholars and Hebrew archaeologists are adamant about the existence of Solomon’s temple. Professor Israel Finkelstein, an expert on Jerusalem archaeology, said, “There is no scholarly school of thought that doubts the existence of the First Temple.”
Discover Magazine pointed out that skeptics not only question whether Solomon’s temple was a real place, they also look for archaeological evidence for the existence of David and Solomon themselves! But in 1993, while digging at Tel Dan in northern Israel, an archeologist found a large stone with Aramaic writing—known as the Tel Dan stele. The stone records a conflict with the kings of Israel and proclaimed victory over the “house of David.” Although the stele’s creation was likely more than a century after Solomon’s death, it does provide evidence for skeptics that David was a real person.
In recent years, a number of tiny artifacts have been unearthed on the Temple Mount that Israeli archaeologists say are conclusively dated to the time of the First Temple. Radiocarbon dates of the artifacts place the discovery site squarely within the period of Solomon’s reign. In spite of new discoveries and continuing controversies, biblical Christians continue to point to the Scriptures themselves, choosing to believe the Word of God rather than the claims of skeptics.
Dawn Wilson has served in revival ministry and missions for more than 50 years. She and her husband Bob live in Southern California. They have two married sons and three granddaughters. Dawn assists author and radio host Nancy DeMoss Wolgemuth with research and works with various departments at Revive Our Hearts. She is the founder and director of Heart Choices Today, publishes Upgrade with Dawn, and writes for Crosswalk.com. | 7 Interesting Facts about Solomon's Temple
In 1883, a biblical scholar, Thomas Newberry, designed a three-dimensional model of Solomon’s Temple as part of the Anglo-Jewish Historical Exhibition, heightening the interest of Jews and Christians alike in the first Jewish temple. The story of Solomon’s commitment to build a permanent house for God engages believers’ imagination. It was stately and beautiful from a human standpoint but also conceived in the heart of another king who dearly loved God. The desire to understand more about the background of its construction and the modern-day controversy surrounding its existence sends believers back to the Scriptures to discover where, how, and why it was constructed, and why God chose Solomon to build it.
Here are some interesting facts Christians should know about Solomon’s Temple:
Who Built Solomon's Temple?
1. Solomon Was Chosen by God to Build the Temple
King David, Solomon’s father, lived in the royal palace, but he was concerned that God’s priests still had to serve Him in the 400-year-old, portable Tabernacle from the wilderness wanderings. David wanted to build a permanent house for God, and a resting place for the Ark of the Covenant (1 Chronicles 28:2). The prophet Nathan initially gave David approval to begin construction, but God spoke to Nathan in a dream. God said David would not be the one to build His house, even though David had a heart after God’s own heart. David would, however, draw up the plans and accumulate materials for the building (1 Chronicles 22:2-4; 22:14-17; 29:2-9).
David was a great warrior king who united the Israelite tribes, captured Jerusalem, and chose Mount Moriah as the site for a future temple. But God said, “You will not build a house for my name for you are a man of battles and have shed blood” (1 Chron 28:3). The honor and responsibility of building the temple would go to his son, Solomon—whose name means “peace.”
Archaeology | Was the Temple of Solomon real? | yes_statement | the "temple" of solomon was "real".. the existence of the "temple" of solomon is a historical fact. | https://www.nationalgeographic.com/pages/article/national-treasure-freemasons-fact-and-fiction | "National Treasure": Freemasons, Fact, and Fiction | "National Treasure": Freemasons, Fact, and Fiction
With an outrageous plot, "National Treasure" seems like a film that is mostly for entertainment with a sprinkling of historical fact.
Published November 19, 2004
Imagine this: Centuries ago an order of European knights amassed a huge treasure of priceless artifacts from around the world.
The loot was later brought to the United States by the Freemasons, a secret society. Determined to keep it out of the hands of the British during the American Revolution, Benjamin Franklin and other Masons hid the treasure in a secret location but left clues to its whereabouts in famous American landmarks.
Now, the great-great-great-great-great-grandson of a carriage boy who learned the secret vows to find the treasure. The clues lead him to an invisible map hidden on the back of the U.S. Declaration of Independence.
Preposterous? Absolutely.
But the plot of National Treasure, the adventure yarn starring Nicolas Cage that opens in U.S. movie theaters today, is also irresistible fun.
It's become a bona fide recipe for success: Invent an old-fashioned treasure hunt, fill it with conspiracies and secret codes, and set it against a backdrop of real history.
When Dan Brown cooked up a similarly far-out plot in his runaway bestseller The Da Vinci Code—about a 2,000-year-old secret it claimed has been concealed by the Catholic Church—readers flocked to religious and historical texts to learn more about what really happened.
Will National Treasure do the same for moviegoers?
"I hope it gets people interested in the past," said Jim Kouf, who co-wrote the screenplay. "After seeing the movie, my daughter grabbed a copy of the Declaration of Independence and brought it to school with her. That was very exciting."
Freemasons
For an indication of the public's fascination with secret societies and conspiracy theories, jump on to the Internet, where thousands of wild Web sites claim that shadowy alliances do everything from running international affairs to managing interplanetary treaties.
Perhaps the most famous secret society is the Freemasons, which grew out of the medieval stonemasons' guilds, formed in England in the early 18th century, and developed into a powerful fraternity.
The Freemasons have enjoyed a reputation as influential politicians, scientists, and artists whose works and charities have enhanced the world. Some Christian leaders, however, have called it a secret society bent on spreading evil.
Of the 55 men who signed the Declaration of Independence, at least 9 are said to have been Freemasons. President George Washington was also among its members.
In the new movie the Freemasons are seen in a positive light.
"The Masons were founded on pretty solid principles, and a lot of those held for the Founding Fathers and probably influenced them a great deal toward democracy at the time," said Kouf, whose grandfather was a Freemason. "When Washington had trouble raising his army, he called upon his Masonic brothers, because he knew he could count on them."
There is a tenuous link between the Freemasons and the Knights Templar, a mysterious order of knights founded in 1119 to protect Christian pilgrims on their way to the Holy Land. Many Freemasons today say they are the spiritual descendants of the knights.
According to legend, the Knights Templar discovered the greatest treasure in human history buried beneath the Temple of Solomon in Jerusalem. What is true, scholars say, is that the knights became wealthy and powerful, and they may have rivaled the influence of some European kings.
"They're mysterious because they were so sensationally successful," said Lisa Bitel, a history professor at the University of Southern California, Los Angeles. "The idea behind lots of these conspiracy and treasure stories is that any individual could happen upon a forgotten relic of the past, join with other like-minded mavericks, and use this relic for personal redemption or universal good."
But in the early 1300s, the knights were suppressed and executed. Whether they found Solomon's treasure is not known. No treasure map has ever been found.
Spoiler warning: Do not read on if you prefer not to learn a key plot development in The Da Vinci Code.
Raising Questions
Although the medieval knights also feature prominently in The Da Vinci Code, it was that novel's main plot twist—that Jesus Christ married Mary Magdalene—which stirred up real controversy. Could this be true?
"There's no evidence for it in any text," said Joseph Kelly, a professor of religious studies at John Carroll University in Cleveland, Ohio, who has given numerous public lectures disproving the "secrets" in Brown's novel.
Kelly says many people are disappointed when he tells them that the marriage never happened. Yet the academic says there are many things in the book that are historically accurate, and he believes the novel serves a valuable purpose.
"Brown tells people something they never knew—that the early history of Christianity was much more complicated than anybody thought," he said.
Kouf, the movie scribe, sees little danger in weaving together fiction and history.
"If we were laying it out as a true story, then I'd agree that we're taking too many liberties," he said. "But because it's set out in an adventure mold like Indiana Jones, I think we're OK. People know some of this stuff didn't happen."
Still, Kouf, who considers himself a "history nut," said he tried to include as many references to U.S. history—and use as many real locations—as possible.
"Mostly we set out to have a rollicking good time," he said. "But if it gets people to also look at history differently and pick up a book about the Founding Fathers, that's great."
Archaeology | Was the Temple of Solomon real? | yes_statement | the "temple" of solomon was "real".. the existence of the "temple" of solomon is a historical fact. | https://www.britannica.com/biography/Solomon | Solomon | Sources, Meaning, Temple, & Facts | Britannica | What is Solomon most famous for?
Solomon is known for being the king of Israel who built the first Temple in Jerusalem. He was also the second (after his father, David) and last king of a unified Israel, which was at the height of its power during his reign. He is known for stories told in the Bible about his wisdom.
What was the religion of Solomon?
The religion of Solomon was Judaism, the monotheistic religion of the ancient Hebrews. However, as related in the first book of Kings, Solomon had many foreign women among his wives. “His wives turned his heart after other gods” (1 Kings 11:4, NIV), and thus he built shrines to the gods of their religions. In the biblical account of his reign, God tells Solomon he will punish him for his apostasy by breaking up his kingdom after his death.
Who were Solomon’s sons?
Solomon was succeeded by his son Rehoboam, who continued the harsh policies of his father toward the northern tribes of Israel. The northern tribes seceded and made Jeroboam, an official of Solomon who had led a rebellion against him, king. According to Ethiopian tradition, their emperors were descended from Menilek I, who was the son of Solomon and the Queen of Sheba.
Background and sources
Nearly all evidence for Solomon’s life and reign comes from the Bible (especially the first 11 chapters of the First Book of Kings and the first nine chapters of the Second Book of Chronicles). According to those sources, his father was David (flourished c. 1000 bce), the poet and king who, against great odds, founded the Judaean dynasty and united all the tribes of Israel under one monarch. Solomon’s mother was Bathsheba, formerly the wife of David’s Hittite general, Uriah. She proved to be adept at court intrigue, and through her efforts, in concert with the prophet Nathan, Solomon was anointed king while David was still alive, despite the fact that he was younger than his brothers.
Material evidence for Solomon’s reign, as for that of his father, is scant. Although some scholars claim to have discovered artifacts that corroborate the biblical account of his reign in the early 10th century bce, others claim that the archaeological record strongly suggests that the fortified cities and even the Temple of Jerusalem actually emerged more than a century later. In the latter view, the kingdom of Solomon was far from the vast empire that the biblical narrative describes.
Reign
The Bible says that Solomon consolidated his position by liquidating his opponents ruthlessly as soon as he acceded to the throne. Once rid of his foes, he established his friends in the key posts of the military, governmental, and religious institutions. Solomon also reinforced his position through military strength. In addition to infantry, he had at his disposal impressive chariotry and cavalry. The eighth chapter of 2 Chronicles recounts Solomon’s successful military operations in Syria. His aim was the control of a great overland trading route. To consolidate his interests in the province, he planted Israelite colonies to look after military, administrative, and commercial matters. Such colonies, often including cities in which chariots and provisions were kept, were in the long tradition of combining mercantile and military personnel to take care of their sovereign’s trading interests far from home. Megiddo, a town located at the pass through the Carmel range connecting the coastal plain with the Plain of Esdraelon, is the best-preserved example of one of the cities that Solomon is said to have established.
Palestine was destined to be an important centre because of its strategic location for trade by land and sea. It alone connects Asia and Africa by land, and, along with Egypt, it is the only area with ports on the Atlantic-Mediterranean and Red Sea–Indian Ocean waterways. Solomon is said to have fulfilled the commercial destiny of Palestine and brought it to its greatest heights. The nature of his empire was predominantly commercial, and it served him and friendly rulers to increase trade by land and sea. One particularly celebrated episode in the reign of Solomon is the visit of the Queen of Sheba, whose wealthy southern Arabian kingdom lay along the Red Sea route into the Indian Ocean. Solomon needed her products and her trade routes for maintaining his commercial network, and she needed Solomon’s cooperation for marketing her goods in the Mediterranean via his Palestinian ports. Biblical legend makes much of a romance between the Queen and Solomon, and his granting her “all that she desired, whatever she asked” (1 Kings 10:13) has been interpreted to include a child.
Tradition recognizes Solomon as an ambitious builder of public works. The demand for fortresses and garrison cities throughout his homeland and empire made it necessary for Solomon to embark on a vast building program, and the prosperity of the nation made such a program possible. He was especially lavish with his capital, Jerusalem, where he erected a city wall, the royal palace, and the first famous Temple. Around Jerusalem (but not in the Holy City itself), he built facilities, including shrines, for the main groups of foreigners on trading missions in Israel. Solomon’s Temple was to assume an importance far beyond what its dimensions might suggest, for its site became the site of the Second Temple (c. 5th century bce–70 ce).
Archaeology | Was the Temple of Solomon real? | yes_statement | the "temple" of solomon was "real".. the existence of the "temple" of solomon is a historical fact. | https://www.biblicalarchaeology.org/daily/biblical-sites-places/biblical-archaeology-sites/searching-for-the-temple-of-king-solomon/ | Searching for the Temple of King Solomon - Biblical Archaeology ... | Searching for the Temple of King Solomon
For centuries, scholars have searched in vain for any remnant of Solomon’s Temple. The fabled Jerusalem sanctuary, described in such exacting detail in 1 Kings 6, was no doubt one of the most stunning achievements of King Solomon in the Bible, yet nothing of the building itself has been found because excavation on Jerusalem’s Temple Mount, site of the Temple of King Solomon, is impossible.
Fortunately, several Iron Age temples discovered throughout the Levant bear a striking resemblance to the Temple of King Solomon in the Bible. Through these remains, we gain extraordinary insight into the architectural grandeur of the building that stood atop Jerusalem’s Temple Mount nearly 3,000 years ago.
The black basalt ruins of the Iron Age temple discovered at ’Ain Dara in northern Syria offer the closest known parallel to the Temple of King Solomon in the Bible. Photo: Ben Churcher.
As reported by archaeologist John Monson in the pages of BAR, the closest known parallel to the Temple of King Solomon is the ’Ain Dara temple in northern Syria. Nearly every aspect of the ’Ain Dara temple—its age, its size, its plan, its decoration—parallels the vivid description of the Temple of King Solomon in the Bible. In fact, Monson identified more than 30 architectural and decorative elements shared by the ’Ain Dara structure and the Jerusalem Temple described by the Biblical writers.
From Babylon to Baghdad: Ancient Iraq and the Modern Westexamines the relationship between ancient Iraq and the origins of modern Western society. This free eBook details some of the ways in which ancient Near Eastern civilizations have impressed themselves on Western culture and chronicles the present-day fight to preserve Iraq’s cultural heritage.
The ’Ain Dara temple and the Biblical Temple of King Solomon share very similar plans. Images: Ben Churcher.
The similarities between the ’Ain Dara temple and the temple described in the Bible are indeed striking. Both buildings were erected on huge artificial platforms built on the highest point in their respective cities. The buildings likewise have similar tripartite plans: an entry porch supported by two columns, a main sanctuary hall (the hall of the ’Ain Dara temple is divided between an antechamber and a main chamber) and then, behind a partition, an elevated shrine, or Holy of Holies. They were also both flanked on three of their sides by a series of multistoried rooms and chambers that served various functions.
Even the decorative schemes of ’Ain Dara temple and the temple described in the Bible are similar: Nearly every surface, both interior and exterior, of the ’Ain Dara temple was carved with lions, mythical animals (cherubim and sphinxes), and floral and geometric patterns, the same imagery that, according to 1 Kings 6:29, adorned the Temple of King Solomon in the Bible.
It is the date of the ’Ain Dara temple, however, that offers the most compelling evidence for the authenticity of the Biblical Temple of King Solomon. The ’Ain Dara temple was originally built around 1300 B.C. and remained in use for more than 550 years, until 740 B.C. The plan and decoration of such majestic temples no doubt inspired the Phoenician engineers and craftsmen who built Solomon’s grand edifice in the tenth century B.C. As noted by Lawrence Stager of Harvard University, the existence of the ’Ain Dara temple proves that the Biblical description of Solomon’s Temple was “neither an anachronistic account based on later temple archetypes nor a literary creation. The plan, size, date and architectural details fit squarely into the tradition of sacred architecture from north Syria (and probably Phoenicia) from the tenth to eighth centuries B.C.”
Gigantic footprints belonging to the resident deity were carved at the temple’s entrance. Photo: A.M. Appa.
Certain features of the ’Ain Dara temple also provide dramatic insight into ancient Near Eastern conceptions of gods and the temples in which they were thought to reside. Carved side-by-side in the threshold of the ’Ain Dara temple are two gigantic footprints. As one enters the antechamber of the sanctuary, there is another carving of a right foot, followed 30 feet away (at the threshold between the antechamber and the main chamber) by a carving of a left foot. The footprints, each of which measures 3 feet in length, were intended to show the presence (and enormity) of the resident deity as he or she entered the temple and approached his or her throne in the Holy of Holies. Indeed, the 30-foot stride between the oversize footprints indicates a god who would have stood 65 feet tall! In Solomon’s Temple, the presence of a massive throne formed by the wings of two giant cherubim with 17-foot wingspans (1 Kings 6:23–26) may indicate that some Israelites envisaged their God, Yahweh, in a similar manner.
Very interesting. But the temple at Tell Ain Dara does not prove the existence of the temple of King Solomon. On the contrary, the writer of Kings may well have invented its description based on the real Tell Ain Dara temple. If that were the case, the coincidences would not be striking but completely expected. Without any archaeological evidence, it is the simplest and most economical conclusion, isn’t it?
The similarity of another temple to Solomon’s temple only proves that the ability to build such a temple was available at the time.
There must have been the first temple, otherwise it could not have been destroyed by the Babylonians.
Why is there no archaeological evidence? Not because there is no archaeological access to the Temple Mount, but because the temple was not built on the Temple Mount. It was farther south, but still north of the City of David, just outside the present city wall of present-day Jerusalem.
The Temple Mount area was the Roman fort Antonio. The building at the north west corner was a watchtower and barracks for the officers. The main stone plateau was the barracks square. It is the same size as many other Roman forts around Europe.
Archaeology | Was the Temple of Solomon real? | yes_statement | the "temple" of solomon was "real".. the existence of the "temple" of solomon is a historical fact. | https://jcpa.org/ancient-muslim-texts-confirm-the-jewish-temple-in-jerusalem/ | Ancient Muslim Texts Confirm the Jewish Temple in Jerusalem ... | Ancient Muslim Texts Confirm the Jewish Temple in Jerusalem
Jerusalem Center researcher Nadav Shragai responds to modern-day Muslim and Palestinian fabrications about the Jewish Temple in Jerusalem with the testimonies of esteemed Islamic religious authorities from more than 1,000 years ago. He presents archeological evidence such as a Jewish ritual bath found under the al-Aqsa mosque and Islamic coins with a Jewish menorah imprinted on them, and documents how the Jews of Jerusalem introduced the Muslim conquerors of the city to the Temple Mount and accompanied them on their visit there. This is a chapter from his latest book in Hebrew, Al-Aqsa Terror: From Blood Libel to Bloodshed (Jerusalem Center for Public Affairs, 2020).
The Palestinian Lie about Jerusalem Has Legs
“A lie,” according to the well-known saying, “has no legs,” but that does not mean lies do not need them.
The “Al-Aqsa is in danger” libel rests on a huge false leg that, in the end, will collapse. The lie would not have survived so long without it. Today, the Palestinians and many Muslims charge that Israel “seeks to destroy al-Aqsa” and build the Temple in its stead on a site where no Temple ever stood; that the Jewish Temple on the Temple Mount is al-miza’um, that is, “supposed,” “fraudulent,” “invented,” or “imaginary;” that the Jews have no connection to the Temple Mount or, for that matter, to the Western Wall.
This is a libel on top of a libel, a double lie. The many Muslims who are convinced that al-Aqsa is in danger are now also convinced that “their” al-Aqsa stands on a place where “our” Temple never stood – the latter being nothing but a fabrication.
Some of the legitimacy that terrorism draws from the libel rests on that added lie. It is more legitimate to libel and murder Jews, so as “to protect the captive al-Aqsa and free it from the Jews who are plotting to destroy it,” if Israel and the Jews who “conspire to attack the site,” have only a false and concocted connection to it. Thus, the lie that undergirds the libel also bolsters the legitimacy to murder in its name. From the standpoint of the “Al-Aqsa is in danger” terrorists and their supporters, they do not murder only those who seek to wrest the Mount from their hands. As they see it, they are also murdering the falsifiers of history, who have no link to the site at all. They also want the Mount to be “liberated” psychologically so that their historical and religious narrative will prevail. This chapter (the appendix of the book) aims to refute this lie as well and to prove that it is nothing but a broken prop.
To grasp the magnitude of the lie, one must go far back on the path the Muslims themselves trod over the past 1,350 years, the path from which they have strayed only in recent times. Despite the misrepresentations and the sweeping denial that many Muslims now adopt regarding the Jewish connection to the Temple Mount and to the Temple that stood there, they themselves were the ones who, up until the Six-Day War, identified the Mount – unequivocally – as the site of Solomon’s Temple and as the place where David said his Psalms. Furthermore, Solomon and David, as important prophets in Islam, are seen as the ones who laid the foundations on the Temple Mount for the building of the mosques there. Nevertheless, today, Muslim clerics and leaders remove the Jewish Temple from the Mount and “transfer” it to places like Mount Zion, Nablus, and even Yemen.
Moreover, many of the names and terms the Muslims have used over the years for the Temple Mount, particularly “Beit al-Maqdis,” which is a translation of the Hebrew name Beit haMikdash, derive from the Jewish designation for the site, where the two Muslim shrines were built around 1,350 years ago. Today, Muslims commonly use the name Beit al-Maqdis for Jerusalem, but in the ancient past, they used the name for the Temple Mount itself. The Jewish people and the State of Israel do not, of course, need the Muslim sources – which, for more than 1,350 years, have identified the Temple Mount as the site of the Temple – to prove their connection to the place. Given, however, the dispute on this issue and the resolutions hostile to Israel in the international arena, which espouse the new Muslim narrative, it is worth presenting the primary Muslim documentation and sources for the Jewish connection to Jerusalem, the Temple Mount, and the Temple. Today, many Muslims erase this reliable documentation from memory. From such forgetfulness, the path is short to denial, and this gives rise to a lie. On this lie now rests the libel from which the “Al-Aqsa is in danger” terror derives its inspiration and legitimacy to murder Jews.
The Writings of Al-Tabari
Let us first turn to the Muslim sages and exegetes of Islamic law over the centuries who refute this long list of lies:
Israel is plotting “to destroy the al-Aqsa Mosque and build the fictitious Temple under it”;
the Western Wall was never used for Jewish prayer before 1917;
according to the Palestinian Authority’s official newspaper, Tisha B’Av, the Jewish people’s national day of mourning, is “the anniversary of what is called ‘the destruction of the Temple’”;
a 1,100-year-old gold medallion bearing quintessential Jewish symbols – a menorah, a shofar, and a Torah scroll – found only 50 meters from the Temple Mount in an organized archaeological dig, is a “forgery.”
Although today’s Muslims rely on their sages’ writings regarding many issues, when it comes to the history of the Temple Mount, they seem to have been erased.
Foremost among these figures is the Persian historian Abu Jafar Muhammad bin Jarir al-Tabari (838-923), one of the earliest and best-known commentators on the Koran and the Islamic tradition. One of his ancient manuscripts, which carries a seal of al-Azhar – the world’s most important educational institution for Sunni Islam – was photographed and smuggled out of Cairo a few years ago by Noa Hasid, who is Muslim by origin, and brought to the Beirut-born Middle East scholar Dr. Edy Cohen of Bar-Ilan University. Cohen published the work in 2016. The text in itself offered nothing new; it had already appeared as part of a commentary on the Koran by al-Tabari, which was published in several editions. Nevertheless, as an original manuscript smuggled out of al-Azhar, it sparked great interest. Al-Tabari writes there, among other things, that “Beit al-Maqdis [the Temple Mount] was built by Solomon, son of David, and was made of gold, pearls, rubies, and of the precious stone peridot, paved with silver and gold, and its columns were of gold.”
At the foot of the Temple Mount. A gold medallion about 1,400 years old, adorned with an embossed menorah, a shofar, and a Torah scroll, found by Dr. Eilat Mazar in an excavation near the Ophel (Uriah Tadmor. All rights reserved to Dr. Eilat Mazar)
This documentation, from an Islamic figure of al-Tabari’s renown, undercuts the “revision” of the Temple Mount’s history by many Muslims in recent years. It stands against claims that invert the truth, according to which “the legend of the bogus Temple is the greatest crime of historical forgery,” and against entire books that have been written in that vein.
In his book History of the Prophets and Kings, al-Tabari refers several more times to the Temple Mount as the site of the Temple, and also identifies Isaac, not Ishmael, as the hero of the “Binding of Isaac” story. The famous commentator described David’s and Solomon’s involvement in building a mosque on the Temple Mount in a way that corresponds closely, in many details, to the Bible’s account of the building of the Temple. This description is typical of similar descriptions in Islam that point to a strong, ongoing connection to Jewish traditions.
David wanted to begin building the mosque and Allah disclosed to him: It is indeed a sacred structure. You defiled your hands with blood and will not build. But you will have a son whom I will coronate after you and his name will be Solomon. Him I will cleanse of the blood. When King Solomon built the mosque and sanctified it, David was a hundred years old, when he heard of the Prophet Muhammad…. The period of his kingship was forty years.
For al-Tabari, Solomon (Suleiman ibn Daud [David]) is the main prophet responsible for the construction on the Mount, where the Muslims built their mosques.
The Muslim geographer Muhammad al-Idrisi, who visited Jerusalem in the 12th century, likewise described “the Temple Mount that Solomon ben David built.” He added that “in the vicinity of the eastern gate of the gates to the Dome of the Rock is the shrine that was called the Holy of Holies, and it is impressive to look upon.” He further attested that the Temple Mount “served as a place of pilgrimage in the era of the Jews and afterward was taken from them, and they were removed from it until the era of the reign of Islam.”
Yakut ibn Abdullah al-Rumi al-Hamawi (1179-1229), a Muslim biographer and geographer, in his book Lexicon geographicum used the term “the Temple,” and in describing its location, he wrote: “Indeed it is Jerusalem [Beit al-Maqdis] and his words to the Israelites were: we have set a meeting with you at the right side of the Mount of Olives, that is – Jerusalem [Beit al-Maqdis].” Later, in an explicit reference to the Temple, he added: “Solomon placed in the Temple [Beit al-Maqdis] wondrous things including the vault from which the heavy chain depends…. And as for al-Aqsa, indeed, it is on the eastern side, in the direction of the qibla, and it was David, peace be upon him, who founded it.”
Al-Tabari, al-Idrisi, and Yakut are not alone. Taki ad-Din Ahmad ibn Taymiyyah (1263-1328), a theologian and commentator from the Salafi school of Sunni Islam, likewise described the vicinity of the al-Aqsa Mosque as having been built by Solomon. Ibn Taymiyyah went back to the time when Omar conquered Jerusalem, and commingled al-Aqsa and the Temple:
The al-Aqsa Mosque is the name of all of the mosque built by Solomon, peace be upon him. Some of the people began to call it by the name al-Aqsa, the place of prayer for which Omar ibn al-Khattab, peace be upon him, built the facade. The prayer at this place that Omar built for the Muslims is incomparably better than in the other parts of the mosque. When Omar opened the Temple, there were huge quantities of garbage on the Rock because the Christians wanted to ransack the place of prayer where the Jews prayed. Omar, peace be upon him, ordered that the trash be removed from there.
An even more elaborate account comes from the renowned 14th-century historian Abd al-Rahman ibn Khaldun in his famous book, The Muqaddimah: An Introduction to History (one of the first historical works to apply scientific criteria and the first of its kind to deal with the social sciences). Ibn Khaldun described the building of the Tabernacle during the Israelites’ wanderings in the desert, the “building of a Tent of Congregation [the Tabernacle] on a wheel” after the Israelites’ conquest of the Land of Israel, and its conveyance to Shiloh and continued migrations. On the subject of the Temple, Ibn Khaldun wrote:
Solomon built the Temple in the fourth year of his reign, five hundred years after the death of Moses…. The doors and walls of the Temple he overlaid with gold…. On the back of the building he made an alcove for the Ark of the Covenant…. Thus, the Temple stood for as long as God wanted it to. Eight hundred years after it was built Nebuchadnezzar destroyed it…. After that, when the kings of the Persians restored the Israelites to their land, the Temple was again built by Ezra…. Subsequently, one after the other, they [the Jews] were governed by the kings of Greece, the Persians, and the Romans…. Herod built the Temple according to the measurements of the Temple of Solomon…. Helena destroyed the remnants of the Temple that she found and ordered that garbage be thrown on the Rock, until it was covered and its location was no longer known – in retaliation for what was done – according to what was believed – to the grave of the Messiah…. Thus the situation remained until the appearance of Islam and the conquest of the Land of Israel by the Arabs…. Caliph Omar came himself to accept the surrender of Jerusalem and asked about the Rock. They showed him its location…. Omar uncovered the Rock and built a mosque on it…. Eventually Caliph al-Walid ibn Abd al-Malik beautified the mosque building.
“In the vicinity of one of the gates to the Dome of the Rock is the Holy of Holies,” wrote al-Idrisi in the 12th century. The Rock that is identified with the Foundation Stone (Chromolithograph by H. Clerget and J. Gaildrau after François Edmond Pâris, 1862. Wellcome Collection/CCBY 4.0)
Another respected Muslim source, which indicates an entirely different Muslim attitude toward the Mount’s Jewish history than the one taken today, is the book by Mujir al-Din al-Ulaymi al-Hanbali, The History of Jerusalem and Hebron. Mujir al-Din (1456-1521) was a historian, geographer, and a judge in the Mamluk administration. He was born in Ramallah but lived his whole life in Jerusalem, toured the Land of Israel, and wrote travel books about Jerusalem, Hebron, Ramallah, and the Shfela (in today’s south-central Israel). In his book, Mujir al-Din identified the al-Aqsa Mosque with the location of the Temple, and in his descriptions, he referred several times to “the Temple Mosque.” He also referred to David and Solomon both as Muslim prophets and as descendants of the House of David monarchy. Mujir al-Din wrote that “David reigned 40 years and before his death bequeathed the kingdom to his son Solomon and ordered him to build the Temple [Beit al-Maqdis].” He added that “when Solomon finished building the Temple, he requested of Allah…wisdom that would befit his wisdom” and “requested of him kingship.”
The Location of Solomon’s Temple
And so, despite the widespread Muslim denial in our time, and along with numerous archaeological sources that we will survey, stands one basic fact: for hundreds of years, until 1967, the story of the Jewish Temple including details about it, and even information on the destruction of the First Temple by Nebuchadnezzar, was a firmly established and undeniable motif in Muslim literature of all kinds. In his book Jerusalem to Mecca and Back: The Islamic Consolidation of Jerusalem, Prof. Yitzhak Reiter enumerated additional classical Arab sources that identify the place where the al-Aqsa Mosque stands with the place where Solomon’s Temple stood:
The 10th-century Jerusalemite geographer and historian al-Maqdisi and the Iranian 14th-century jurist al-Mustawfi identified the al-Aqsa Mosque with Solomon’s Temple. In a 13th-century poem by Jalal al-Din al-Rumi, the building of Solomon’s Mosque was defined as the building of the al-Aqsa Mosque, and the Rock within the compound was usually the Arab designation of Solomon’s Temple and the heart of the al-Aqsa compound. In addition, Abu Bakr al-Wasati, who was an al-Aqsa preacher at the beginning of the 11th century, offered in his book of praises for Jerusalem different traditions that present the Jewish past of the Temple.
The Palestinian archaeologist Dr. Marwan Abu Khalaf of Al-Quds University is scrupulous, unlike many Palestinian archaeologists, in quoting the words of the Christian pilgrim Arculf. Arculf visited the Land of Israel in 670, after the Arab conquest, and spent nine months in Jerusalem. He related that “on the site where the Temple once stood,” the Muslims built a mosque. Reiter referred in his study to an official historical document of the Organization of Islamic Cooperation (formerly the Organization of the Islamic Conference), which stated that “the Rock is the place on which Abraham bound his son [according to the Islamic tradition, Ishmael], and the place from which the Prophet Muhammad ascended to the heavens,” and that “this is the site on which Solomon and Herod built the First and the Second Temple.” Thus, the al-Aqsa Mosque – as written in an official document of the Muslim countries – is the place where the Temple of Solomon stood in Jerusalem, on Mount Moriah, a site that was sacred to both Jews and Christians.
A contemporary figure, Sheikh Abdul Hadi Palazzi, one of the heads of the Muslim community in Italy, acknowledges that the Rock on the site of al-Haram al-Sharif (the Noble Sanctuary – a Muslim term for the Temple Mount) is the Foundation Stone mentioned in Jewish sources, and that early Islam recognized the Rock as the Jews’ direction of prayer. Palazzi has also noted more than once that the Koran confirms the State of Israel’s right to the Land of Israel and Jerusalem.
The current, thorough Muslim denial of any Jewish connection to the Temple Mount applies to its walls as well, and particularly to the Western Wall. “The Jews have no right to the Western Wall,” claim, for example, both Sheikh Ekrima Sabri, the former mufti of Jerusalem, and the Al-Aqsa Association for Heritage and Waqf Preservation. They also assert that the “al-Buraq Wall [the Western Wall] is an exclusive Muslim waqf property.” Even Nasr Farid Wasil, former mufti of Egypt, contended that it is forbidden for Muslims to use the term Western Wall in lieu of its real name, the al-Buraq Wall. Al-Buraq was the animal on whose back, according to Islamic tradition, the Prophet Muhammad traveled from Mecca to Jerusalem.
The “Fabricated Shrine”
On the issue of the Western Wall, just as on the issue of the Temple Mount, contemporary Palestinians and Muslims consign to oblivion things that were written by learned Muslims – from their standpoint, experts – only in the past century. The most prominent among these is the Palestinian historian Aref al-Aref (1892-1973). A declared Palestinian nationalist, he directed the Rockefeller Archaeological Museum and, in the 1950s, served as mayor of Jordanian Jerusalem. Al-Aref included the Western Wall in the list of Jewish holy places in Jerusalem, and wrote, “It is the external wall of the Temple that was renovated by Herod…. And the Jews visit it often and particularly on Tisha B’Av, and when they visit it, they remember the glorious and unforgettable history and begin to cry.” Moreover, in his book History of Jerusalem, al-Aref states that “the location of al-Haram al-Sharif is on Mount Moriah that is mentioned in the book of Genesis, the place of the threshing floor of Araunah the Jebusite, which David purchased so as to build the Temple on it, and where Solomon built the Temple in 1007 BC.” He further added that “among the remnants of the Solomon era is the building that is under the al-Aqsa Mosque. The place was owned by the Jews for a certain period and afterward returned to the possession of the Muslims, who called it al-Haram al-Quds because it was holy to all the Muslims.”
“The location of al-Haram al-Sharif is on Mount Moriah,” wrote Aref al-Aref. Seen here making a speech (at the center of the photo) to a Jerusalem assembly in 1920 (Library of Congress)
Even the Supreme Muslim Council, in the days of Grand Mufti Haj Amin al-Husseini (instigator of the 1929 Palestine riots and fierce opponent of Zionism), published a tourist guidebook that describes the Temple Mount as “one of the oldest [sites] in the world. Its sanctity dates from the earliest (perhaps from pre-historic) times. Its identity with the site of Solomon’s Temple is beyond dispute.” The guidebook adds that “this, too, is the spot, according to the universal belief, on which [2 Samuel 24:25] ‘David built there an altar unto the Lord, and offered burnt offerings and peace offerings.’”
Up to the year 2000, one could still find a few tourist guidebooks printed in Ramallah that acknowledged the true location of Solomon’s Temple as the Temple Mount. Prof. Sari Nusseibeh, former president of al-Quds University in east Jerusalem, former PLO representative in the city, and member of a well-respected Muslim family that has lived in Jerusalem since the seventh century, is also one of the few Palestinians who have dared to come out against the phenomenon of Temple denial. In his book co-authored with Anthony David, Once Upon a Country: A Palestinian Life, Nusseibeh referred to Yasser Arafat’s assertion, after the failure of the Camp David Summit (September 2000), that Solomon’s Temple was built in Yemen. “When I heard this,” Nusseibeh wrote, “I was filled with fear lest the chairman was losing all ties with reality.” Nusseibeh thereby acknowledged that today’s Islamic thinkers are distorting the history of Jerusalem, and noted as well that also “tour guide books that were printed in Syria over 100 years ago called the area on which the Dome of the Rock stands the Jewish Temple. These things were written as something accepted.”
Another deviation from the current Muslim narrative was documented by the Middle East scholar Dr. Yaron Ovadia on the governmental website (in Hebrew) “The Heritage of Israel on the Temple Mount.” Ovadia recently pointed to a book that was published in Arabic in 2017, Writings of Solomon, which gives the story of King Solomon and the building of the Temple. It states, among other things, that Solomon engaged for seven years in building the Temple at the site of Araunah the Jebusite’s threshing floor, and that the Temple stood there until the Babylonians destroyed it in 586; “after that Zerubavel built it again with the approval of the Persian King Cyrus…. After that, the Maccabees renovated it, and after them, Herod renovated it in 26 BC.”
The Palestinians’ repudiation of the historical truth, which they too recognized in the past, occurred slightly before the Six-Day War, and for the most part after it. Already in 1966, the Supreme Muslim Council reprinted the Abbreviated Guide to al-Haram al-Sharif. Although this work quoted from the words of Aref al-Aref about the Western Wall, it omitted from them a prior reference by the Muslim historian to the Wall’s holiness for Jews and instead emphasized its holiness for Muslims. And whereas, in guidebooks it had published in the 1920s and 1930s, the Supreme Muslim Council had unequivocally identified the Temple Mount as the location of Solomon’s Temple, a guidebook it published in the 1990s, for example, stated: “The beauty and serenity of the al-Aqsa Mosque in Jerusalem attracts thousands of visitors from all faiths annually. Some believe this was the location of the Temple of Solomon, blessing and peace be upon him, which was destroyed by Nebuchadnezzar in 586 BC, or the site of the Second Temple, which was destroyed utterly by the Romans in 70 AD, though there are no historical documents or archaeological testaments that corroborate this.”
“Its identity with the site of Solomon’s Temple is beyond dispute,” is written in a guidebook to the Temple Mount that the Supreme Muslim Council published in 1924 (courtesy of Gabi Barkay)
We see, then, from assertions made in the course of more than 1,350 years, that many Muslims have changed their outlook to “some believe,” and at present, the Jewish Temple on the Temple Mount is called “the fabricated shrine” – the word “shrine” referring to Solomon’s Temple and the word “fabricated” branding it a fraud.
Jewish Overlap with Muslims
Against the backdrop of the ancient and rich Muslim documentation of the Jewish connection to the Temple Mount stands Islam’s structural and hardly coincidental similarity to Judaism, which it directly drew upon in its early days. Muhammad was strongly influenced by Judaism and by the Jews, who were his neighbors in the Arabian Peninsula (Hejaz), particularly in the city of Medina. He tried without success to convert some of them to Islam; in a bid to win over the Jews of Medina, he called upon his believers to pray in the direction of Jerusalem (the first qibla). Only after they refused did he order his believers to pray toward Mecca.
Already at its outset, Islam adopted basic Jewish traditions such as the prohibition on eating pork, a daily regimen of prayers, circumcision, fast days, building houses of prayer, as well as sacred exegetical literature, a kind of “oral Torah.” Prof. Hava Lazarus-Yafeh, a prominent scholar of Islamic culture, notes that many Jewish materials were assimilated in one way or another into the Koran (such as the stories of the Patriarchs, the story of Abraham and the idols, and the stories of Joseph, Moses, David, and Solomon). The Koran devotes its 17th chapter, the Night Journey surah, to the Israelites. The chapter begins with Muhammad’s Night Journey from Mecca to the al-Aqsa Mosque, then immediately mentions the giving of the Torah to Moses and hints at the destruction of the two Temples.
Moreover, Islamic hadiths and writers asserted that it is possible to identify Koran verses that were taken from “the true Torah” and from Bible stories. For example, legendary accounts of the Jewish sage Kab al-Ahbar’s conversion to Islam in 638 state that at least ten specific verses in the Koran were to be found in “the true Torah.” The thinker al-Ghazali (d. 1111) said so as well, and so did Ibn Qayyim al-Juziyah (d. 1350), who wrote: “Some [of these verses] are found in the Torah, and these they [the Jews] hold in their possession, and also in the prophecies of Isaiah and in the books of other prophets.”
Ignáz Goldziher (1850-1921), the great Jewish scholar of Islam, commented once in this context that the problem of the historical authenticity of the Islamic hadith literature of the Sunna (considerable parts of which are also identical or similar to Jewish texts that precede it) reminded him of a saying of the Jewish sages in the tractate Hagigah of the Mishnah: “Anything an experienced student can point out to his rabbi was already said to Moses on Sinai.” Lazarus-Yafeh remarks that this idea is “formulated in Islam in a paradoxical statement that is attributed to the Prophet Muhammad himself: ‘Every beautiful word – I said, whether I said it or I did not say it.’” On the basis of this statement, the authenticity of what is attributed to Muhammad and his teaching did not at all concern the ancient culture of Islam.
Nor were the many Islamic scholars who documented in their writings the Jewish connection and precedence on the Temple Mount disturbed by the fact that, when Islam came to the Mount, it received it “second-hand.” Surprising as it may seem, the leading Muslim religious scholars derived their ancient testimonies about the Jewish connection to the Temple Mount, on which the Temple stood, from a simple historical and religious understanding: the initial motive for the Temple Mount’s sanctification in Islam and the building of the mosques there was the return to the holy site on which the Temple stood, in an attempt to replace, there and in general, the “invalidated religions” – Judaism and Christianity – with Islam, the “supreme religion.” Nowadays, one can marshal an orderly set of sources and evidence for this fact, backed by historians, up-to-date research, and experts on Islam.
The most convincing sources for the existence of the Temple and for the precedence of the Jews on the Mount – which even Muslim “scholars” who now rewrite the history will find difficult to contend with – deal with the stage at which the Dome of the Rock was built: the era of the fifth caliph of the Umayyad dynasty, Abd al-Malik. These sources indicate a kind of “overlap” between the Jews and the Muslims regarding the Mount, as the Jews sought to familiarize them with the compound as well as the Foundation Stone and its boundaries. This help that the Jews offered in getting to know the Mount occurred immediately after the Muslims wrested the site from the common enemy, the Byzantines.
Furthermore, studies by well-known scholars, including leading contemporary researchers of Islam, tell us that in the early days of the Dome of the Rock, there were many similarities between the religious ceremonies conducted there and those that were practiced in the Temple.
The archaeologist Prof. Dan Bahat discusses these processes in his forthcoming book The Temple Mount: Topography, Archaeology, and History, which includes a chapter on the history of the Temple Mount during the Islamic era. “The Jewish sources,” Bahat notes, “almost all of them from the Cairo Geniza,” indicate that “it was the Jewish elders who showed the Muslims the boundaries of the Foundation Stone,” which was covered with garbage and sewage – boundaries from which the Muslims derived the dimensions of the Dome of the Rock, which was built above the ancient Rock.
The Muslims, who knew about the Jewish connection to the Temple Mount and to Jerusalem, respected the Jews who carried out maintenance work at the site during the first centuries of the existence of the Dome of the Rock and the al-Aqsa Mosque on the Temple Mount – sweeping the floors and carpets of the mosques, filling oil lamps, or cleaning the mikvahs there. There are many testimonies to this; one of them is by Mujir al-Din of the 15th century, whom the Muslims consider an authority on the ancient Islamic history of Jerusalem.
An additional, earlier testimony, apparently from the ninth century, is cited by Prof. Amikam Elad:
And it [the mosque?] had ten Jewish servants [religious functionaries] …. They multiplied and became twenty people…. They were employed in cleaning the refuse left by people in the times of pilgrimage and in the winter and the summer, cleaning the ritual bathing places around the al-Aqsa Mosque…. Apart from that, it had a group of Jewish servants who would make the glass for the lamps, the large cups…and other things in addition to that.
Another source, cited by Bahat, likewise attests that Abd al-Malik gave Jewish families permission to engage in maintenance work at the al-Aqsa Mosque and at the Dome of the Rock and also to pray at the gates to the Temple Mount. Bahat suggests that “first the Muslims allowed Jews to pray on the Mount…but later, apparently in the ninth century, they were expelled from it, but permitted to keep praying beside its gates.”
The Muslim author Ibn Abd Rabiah, “who wrote about Jerusalem,” Bahat notes, “attested already in 913, only about 200 years after the Dome of the Rock was built, that the Dome of the Chain on the Temple Mount, which today is identified as a Muslim element, was called by that name because in Israelite days a Jewish law court stood at the spot as well as a miraculous chain of justice, which the teller of a lie could not grip.” Another source cited by Bahat indicates that because the Jews constituted a certain force under the patronage of the conquering Muslims, they were given permission to build a house of prayer on the Mount but, after a short time, were removed from the spot by the Muslims.
As noted, in the early days of the Dome of the Rock, a cult was practiced there that was surprisingly similar to the cult practiced in the Temple. “The Muslims,” observed Dr. Milcah Levi-Rubin, a historian and scholar of the ancient Islamic era,
would anoint the stone with an incense offering, according to the instructions that are given in Talmudic sources. In the compound itself, Jews and Christians served, and the attire of the holy servants closely resembled the attire of the priests as described in the Bible: the tunics, the miter, and sashes made from precious and ornamented fabrics. The holy servants also purified themselves before the cult…. Apparently, in those early days, the Muslims saw themselves as the ones who practiced the cult of the Jewish Temple.
In her article “Why Was the Dome of the Rock Built? Between Beit al-Maqdis and Constantinople,” Levi-Rubin sums up this issue:
Important is…the fact – which Profs. Amikam Elad, Moshe Sharon, and Herbert Bosa have already discussed at length – that the customs and ceremonies that were practiced in the building in the early years resembled those practiced in the Temple; the similarity is evident in the special attire of those conducting the ceremonies [the priests], in the special status of Monday and Thursday, in the purification ceremonies that preceded the cult, in the way incense was used, in the call to prayer, and so on. Even though all these existed for a short time only, they clearly indicate the reason for the initial choice of the site.
Levi-Rubin adds further that “based on artistic features that are supported by Muslim sources, the two scholars, Priscilla Soucek and after her Raya Shani, found that from the start, the Dome of the Rock building was intended to be a reconstruction of Solomon’s Temple.”
In addition, Prof. Ofer Livne-Kafri, whose main field of research is the Arabic literature and Islamic culture of the Middle Ages, points out that Islamic traditions gave expression to the Jews’ anguish over the destruction of the Temple and to their hopes for its renewal by the Muslims. Many of these traditions appear in the literature of praises of Jerusalem (fada’il Beit al-Maqdis) that Livne-Kafri and others have researched. One of the most notable of these traditions, which eventually was censored and its Jewish background obscured, highlights the Jewish distress over the Temple’s destruction and Islam’s initial connection to Judaism. This tradition is quoted by Ibn Abu al-Muwali al-Mishraf ibn Abu al-Marja ibn Ibrahim al-Maqdisi (11th century):
Kab al-Ahbar [apparently a Jewish convert to Islam] found written in one of the holy books: [I have received word] that Jerusalem is Beit al-Maqdis and the Rock [the Foundation Stone] is called by some the Shrine [al-Heichal]. I will send you the slaves of Abd al-Malik and he will build you and ornament you. And I will restore to Jerusalem its rule as in the beginning and I will crown it in gold and silver and in precious stones. And I will send to you those I have created and I will place on the Rock my throne of honor. I am the sovereign God, and David is the king of the Israelites.
If that is not sufficient, there is another relevant fact that totally refutes the Palestinians’ current absolute denial of a Jewish connection to the Temple Mount and Jerusalem. Umayyad coins, on which the famous menorah of the Temple appears along with the text of the Shahada (the Islamic declaration of faith), likewise indicate how much the Muslims were influenced in their early days on the Temple Mount by its original owners – the Jews. These coins, which were minted during the Umayyad dynasty (661-750), were dated by researchers to the period between the time of Abd al-Malik and the beginning of the Abbasid era. They could even have been minted in Jerusalem, though that is not certain, but the coins bearing the menorah, a classic Jewish symbol, were undoubtedly minted by a Muslim government. Prof. Dan Barag found two types of coins from the Umayyad period that bore pictures of the menorah of the Temple. One of them showed a seven-branched menorah, the other a five-branched one. Dr. Yoav Parhi offered a possible explanation for the difference. He noted that a Baraita (an external tradition, not incorporated in the Mishnah) repeated in the Babylonian Talmud three times prohibits the making of a menorah similar to the one that existed in the Temple. “If we assume cautiously that these coins were minted [for Muslims] under Jewish influence or even by Jews,” Parhi conjectures, “then it is possible that the engraver – or someone responsible for the impressing – saw the presentation of the seven-branched menorah as forbidden and decided to alter it.”
The menorah as a Jewish-Muslim symbol. An Umayyad coin with the Jewish symbol of the menorah beside a text from the Muslim declaration of faith (collection of Abraham and Marian Scheuer Sofaer, Israel Museum, Jerusalem)
Substantiation of the many testaments offered here was also provided in 2016 by the archaeologists Asaf Avraham and Peretz Reuven. They published an inscription dating back over a thousand years that was discovered in the mosque of the village of Nuba near Hebron. The inscription attests that, at the onset of the Islamic era, the structure of the Dome of the Rock was indeed called Beit al-Maqdis in reference to the Temple that had stood there earlier. The ancient inscription was affixed above a prayer alcove in the mosque that was built in the days of Caliph Omar ibn al-Khatib (634-644 CE) and stated: “In the name of Allah the merciful and compassionate. This estate within its boundaries and domain [is a] sacred waqf of the Rock of Beit al-Maqdis and the al-Aqsa Mosque, which the emir of the believers, Omar ibn al-Khatib, sanctified to Allah the most high.” This discovery by the pair of researchers, which undermined the new and invented narrative of numerous Muslims about the absence of any Jewish connection to the Temple Mount, sparked the wrath of many Muslims, and the researchers bore the brunt of slanders, vituperation, and enraged reactions across the Arab world.
Thus, the use of the name Beit al-Maqdis is no coincidence. It stemmed, as we saw, from the influence of Jewish traditions on the development of Islam in its early days. Today, there is no educated Muslim who does not know that Jerusalem was called Beit al-Maqdis (from the Hebrew Beit haMikdash, or the Temple) for centuries. The two archaeologists who discovered the Nuba inscription had already spent years researching the “Jewish-Muslim connection” in the seventh and eighth centuries CE. They, too, like Bahat, Barag, and Parhi, have documented Muslim tools and coins bearing Jewish motifs, particularly the menorah, thus linking a quintessentially Jewish artifact to the ancient world of Islam.
The Dome of the Rock is Beit al-Maqdis, the “Nuba inscription” says in effect (Asaf Avraham)
Cooperation and Competition
The encounter between Jews and Muslims on the Temple Mount, then, goes back to the early days of Islamic rule there. It involved a mix of cooperation and competition. The historical sources say it was the aforementioned Jew, Kab al-Ahbar (Kab of the “comrades” or Jewish sages, who, according to many testimonies, converted to Islam), who guided Caliph Omar to the site of the Temple. According to Islamic traditions, it was Omar who collected and removed from the Temple Mount (along with others) much garbage and animal droppings, which the Byzantines had thrown there to insult the Jews. The scholar of Judaism, Judah Even Shemuel, found that some Jews viewed the Muslim conquest of Jerusalem as the beginning of the redemption and pinned great hopes on Omar ibn al-Khatib, builder of the first mosque (out of wood) on the Temple Mount. The Muslims, for their part, saw themselves as reviving the tradition of the ancient Temple of Solomon, whose existence they now deny.
Another Muslim identification of the Temple Mount as the site of the Temple can be found in the artistic domain, namely, sketches of the Temple Mount in Islamic manuscripts starting at the end of the 12th century. These drawings also manifest Islam’s self-conception as the successor of the Jewish religion. Prof. Rachel Milstein, who researched miniature works of art on religious subjects, discovered that the earliest depictions of Beit al-Maqdis were drawn or printed, using a wooden or metal board, on certificates that the pilgrims to Mecca held in their hands. Those pilgrims who added Jerusalem to their journey received at some point a supplement of pictures of al-Haram al-Sharif. The Temple Mount is drawn there as a horizontal row of cells, with the central one identified as “The Dome of the Temple.”
Gold, Pearls, Ruby, and Peridot
Archaeology, which explores the past of human civilization in light of findings from deep in the earth, also reinforces the historical depiction of the Temple on the Temple Mount. In a typical example, the words of al-Tabari from the ninth century, already quoted here, describe the Temple as made out of gold, pearls, rubies, and peridot. The text of the Persian scholar al-Tabari dovetails not only with the Jewish historical testimonies but also with the archaeological findings of the Temple Mount Sifting Project, which began early in the 2000s in Emek Tzurim National Park in Jerusalem. As part of this unique project, the researchers succeeded in reconstructing beautiful replicas of tiles from the flooring of the courtyards of the Temple. With their impressive appearance, these replicas correspond to the “landscapes” of the Temple that al-Tabari described in his writings.
Fragments of these tiles – colorful shards of flooring of the opus sectile kind, which were found in the earth of the Temple Mount – were dated with certainty to Second Temple days. They are believed to have served as flooring in porticos that surrounded the Temple compound, and in the large plazas where the numerous pilgrims who came to the Temple assembled. The floor tiles appear to have been laid there by foreign artists from Rome whom the Emperor Augustus sent to his friend King Herod (who renovated the Temple and expanded the Temple Mount in the first century BCE).
Archaeological evidence for historical sources. Reconstructions of the floor tiles in the Temple Mount’s courtyards in Herod’s days (courtesy of the reconstructor Frankie Schneider and Tzahi Dvira)
For the first time in archaeological research, then, the appearance of the floor of the magnificent Temple Mount in Herod’s time was reconstructed with a high degree of certainty, along with some of the most beautiful designs that ornamented the courtyards of the Temple Mount and its wings. The reconstruction was done by Frankie Schneider, a member of the team of researchers headed by the archaeologists Dr. Gabi Barkay and Tzahi Dvira. Apparently, then, archaeological evidence for the splendor described by al-Tabari has been found in the earth of the Temple Mount. This is a further, unique archaeological substantiation of the Talmud’s words about Herod’s Temple: “He who has not seen Herod’s building, has not seen a beautiful building in his life.”
This rare find (along with al-Tabari’s description) likewise corresponds to the description of the famous eyewitness Josephus Flavius, who saw this flooring with his own eyes: “Who can describe the flooring of these buildings, stones made from different and expensive stones, which were brought from all the lands in abundance…. And all of the plaza under the heavens was paved with colorful stones…. The uncovered courtyard was paved entirely with stones of different kinds and colors.” The tractate Sukkah of the Talmud also describes rows of “stones of black and white marble” from which parts of the Temple were built.
At the end of the 1990s, the earth in which shards of the floor tiles from the Temple’s courtyards were found was dug up from the Temple Mount by the Muslims in an outrageous manner and without archaeological supervision. The Waqf and the Northern Branch of the Israeli Islamic Movement broke into the underground recesses of what is called Solomon’s Stables and turned the place into a huge mosque. They removed enormous quantities of material from the spot in about 400 trucks, carrying about 9,000 tons of earth harboring archaeological relics from all the epochs of the Temple Mount’s history. The earth was dispersed in Jerusalem and its periphery, mainly in the riverbed of the Yarkon River. From there it was gathered, transferred to Emek Tzurim, and for more than 13 years, week after week, meticulously sifted by archaeologists and a record number of more than 200,000 volunteers. This extraordinary scientific-educational project was conducted with the approval of the Israel Antiquities Authority, the sponsorship of Bar-Ilan University, and with funding from the Ir David Foundation, and by the end of 2017, about 70 percent of the material had been sifted.
An arrowhead from the early days of the First Temple (tenth century BCE) (Courtesy of Tzahi Dvira, Temple Mount Sifting Project)
A coin from the second year of the Great Revolt against the Romans, bearing the words “Freedom of Zion” (Courtesy of Tzahi Dvira, Temple Mount Sifting Project)
A silver coin with the legend “The half-shekel” and “Holy Jerusalem.” First year of the Great Revolt against the Romans (Courtesy of Tzahi Dvira, Temple Mount Sifting Project)
An arrowhead that apparently was used by the Babylonian army at the time of the destruction of the First Temple (Courtesy of Tzahi Dvira, Temple Mount Sifting Project)
A truck unloads earth from the Temple Mount in Emek Tzurim, the sifting site over the years
Despite the great destruction that the Waqf and the Israeli Islamic Movement wrought by digging the pit in the earth of the Temple Mount, and despite the aggressive use of heavy tools and bulldozers to enable large numbers to enter the huge underground mosque built in Solomon’s Stables, the Sifting Project was able to salvage hundreds of thousands of tiny finds. These testified to the past of the Temple Mount and to the war and destruction it has undergone. Numerous articles about these finds have already been published; here we will briefly mention only some of them. They, too, refute the lie that seeks to erase the Jewish chapter from the history of the Temple Mount.
The volunteers devoted great quantities of time to the sifting work. They extracted from the earth of the Temple Mount an arrowhead from the early days of the First Temple that may have belonged to the fighting forces of King Solomon; slingstones, apparently from First Temple days, that may have been propelled by the Babylonians during the battle in which the Temple was destroyed, or may have been used a hundred years earlier during the siege of the city by Sennacherib, king of Assyria; a Babylonian arrowhead from the First Temple period; an arrowhead from the Hasmonean Hellenistic period – perhaps a memento from the battle in which Judah the Maccabee liberated the Temple Mount; an arrowhead that was shot by the Roman army during the Siege of Jerusalem (70 CE); arrowheads from the Crusader conquest; as well as testaments to later battles: Ottoman, British, and Israeli bullet casings.
Also found were about 7,000 coins. Nearly half of them were cleaned, and about 17 percent of those were dated to Second Temple days and to other periods that preceded the Islamic era. Silver coins were found from the Persian period (fourth century BCE), as well as coins from the time of Antiochus IV Epiphanes or “Antiochus the Wicked,” on which his image appears (second century BCE). It was Epiphanes who foisted the harsh decrees on the Jews that led to the Maccabean Revolt. Also discovered were coins from the Great Revolt against the Romans (68 CE) bearing the inscription “Freedom of Zion.”
Another rare coin that was dug from the earth of the Temple Mount and stirred special excitement was minted during the first year of the Great Revolt in 66-67 CE. On the front of the thick coin, which is made of silver, appears a branch with three pomegranates and an inscription in the ancient Hebrew writing: “Holy Jerusalem;” on the back is the inscription “half-shekel,” the cup of the Omer offering, and above it the letter Aleph to mark the first year of the revolt. Half-shekel coins were used to pay the Temple taxes, and during the revolt, they replaced the Tyrian shekel. These coins apparently were minted on the Temple Mount itself by the Temple authorities. This marked the first time a coin of this kind was located in the earth taken from the Mount itself. The discovery substantiated the ancient text from the Mishnah in the Shekalim tractate, which is based on chapter 30 of the book of Exodus, which tells how every male Israelite was required to pay a half-shekel to the sanctuary.
Another find from the Temple Mount Sifting Project, with a direct connection to the First Temple, is a small clay stamp seal, originally attached to a cloth sack that apparently contained pieces of money or silver. The stamp seal bears the inscription: “[…]liyahu [ben] Immer.” The Immers were a well-known family of priests at the end of the First Temple era, from the seventh to the beginning of the sixth century BCE. Pashur ben Immer is mentioned in the Bible as “chief governor in the house of the Lord” (Jeremiah 20:1). In the view of the archaeologist Tzahi Dvira, “This seal was used to stamp luxury items that were kept in the treasury of the Temple, which was administered by the priests. This stamp seal is the first Hebrew inscription that was ever discovered from the First Temple and constitutes direct evidence of the administrative activity of the First Temple priests.”
Also collected from the earth of the Temple Mount were dice used by the Romans, with which the guards of the Mount apparently passed their time, as well as architectural fragments with engraved ornaments from Second Temple days, some of which seem to have been incorporated into the Temple itself. Also found were tens of thousands of animal bone fragments, many of them scorched, which may have been burned in the fire of the altar and possibly in the fire that destroyed the Temple.
A Mikvah under Al-Aqsa
Along with the Waqf’s destruction of the antiquities on the Mount and the Israeli authorities’ failure to prevent it – as reported extensively in the media over the years – a whole array of archaeological discoveries, uncovered in the course of the unruly and unsupervised activity of the Waqf and the Muslims on the Temple Mount, were kept from the eyes of the public. These were not intentional discoveries resulting from an organized archaeological excavation. The Israel Antiquities Authority walks on the Mount on tiptoes, like a disabled person with tied hands. The Israeli authorities since 1967 and the Jordanian authorities from 1948 to 1967, and even the British authorities from 1917 to the establishment of Israel, refrained from digging on the Mount. The Muslims did not permit it. Nevertheless, over time, as a result of ongoing building and maintenance, random discoveries – some of them sensational – were made by none other than the Muslims that were documented by the authorities and various researchers. The discoveries almost always stemmed from observations by visitors or partial and unofficial supervision by people from the Department of Antiquities (later the Israel Antiquities Authority). Most of this material was “buried” in the supervisory files of the authority, or in the Mandatory archive of the Department of Antiquities. For many years it did not come to light, mainly to avoid embarrassing the Muslims by publicizing Jewish and Christian chapters from the history of the Temple Mount that the archaeological finds substantiate.
For example, only a few years ago, the archaeologist Tzahi Dvira published new information from the various random digs on the Temple Mount over the past hundred years. Although the article came out in a scientific journal of Bar-Ilan University, Hidushim b’Heker Yerushalayim, the media gave it almost no mention. Dvira burrowed into the photograph archive of the Mandatory Department of Antiquities and found treasure there. He discovered a stack of photographs and abundant documentation that the director of the department, Robert Hamilton, gathered in the course of the extensive renovations of the al-Aqsa Mosque by the Waqf from 1938 to 1942. The renovations were needed because of the earthquakes that occurred in 1927 and 1937. Hamilton’s wide-ranging book on the al-Aqsa Mosque, published in the middle of the last century, contains almost no trace of these materials; Hamilton simply ignored them. What is common to all these discoveries, Dvira points out, is that “they precede the ancient Arab period.” He surmised that in the Mandate period, just like today, examination and documentation were dependent on the mercies of the Waqf; hence the British researcher chose not to publish findings indicating that important, non-Muslim, public buildings that preceded the mosque stood on the site.
The mikvah under al-Aqsa(archive of the Mandatory Department of Antiquities, Israel Antiquities Authority)
Under the eastern gate of the current mosque, for example, Hamilton found a plaster cistern with a staircase leading down to it, which apparently served as a Jewish mikvah in Hasmonean times. Along the staircase were visible remnants of a partition similar to numerous partitions that were found among the Jerusalem mikvahs.
The British director of the Department of Antiquities was not the only one who was loath to publish such findings. The Israel Antiquities Authority was also very cautious about publishing “incidental” findings that were made in the course of work by the Muslims, both so as not to embarrass the Waqf and so that, in the future, the Waqf would not prevent its workers from documenting similar incidental findings. The examples are numerous and extend from the first years after the Six-Day War to the present.
In 1970, when the Waqf dug an emergency pool for putting out fires after the Christian Australian Michael Dennis Rohan set fire to the al-Aqsa Mosque, it uncovered at the site a large pit, an access trench beside it, and an ancient wall whose stones were reminiscent of Herodian stones (or, according to another opinion, a retaining wall or barrier from First Temple days). These findings, which were documented in real-time by the archaeologist Ze’ev Yevin, were registered in the files of the Israel Antiquities Authority and revealed only eight years later by the Temple Mount researcher Prof. Asher Kaufman.
In the summer months of 2007, the Waqf dug two 200-meter trenches in the most sensitive location on the Temple Mount, the elevation where the Dome of the Rock stands – and where, most of the researchers believe, the Temple stood. This dig, too, yielded a series of archaeological discoveries. Personnel of the Israel Antiquities Authority who saw these finds reported on foundations and fragments of the Herodian columns, on tools from the early Muslim period, and on flooring and trenches from ancient times. They told of many shards, some of which were stolen by local Muslims, and even of a drainage canal quarried from rock and covered with stone slabs about which nothing was known – a finding that managed to surprise the archaeologists.
The most sensational incidental find, which occurred in 2007 and was partially publicized by the Israel Antiquities Authority (with special approval from the prime minister at that time, Ehud Olmert), was a sealed layer of ground from the First Temple period. In the archaeologists’ view, it was “preserved as a homogeneous whole from First Temple Days, and the shards that were identified there were preserved at that location and had remained unchanged since First Temple Days.”
The first announcement that the authority issued gave few details about the nature of the dramatic find, but noted that it had been examined by a special team that included, among others, Prof. Ronny Reich of the University of Haifa, Prof. Israel Finkelstein of Tel Aviv University, and Prof. Seymour Gitin, director of the Albright Institute of Archaeological Research. Only years later, at an archaeological conference in Jerusalem in 2016, did the director of the Jerusalem Region in the Israel Antiquities Authority, Dr. Yuval Baruch, reveal that the most important finds related to the “sealed layer of ground” were a group of potsherds, fragments of bowls and cooking pots and jugs, that had been dated to the end of First Temple days (at the time of the Kingdom of Judah). Beside them were found bones of cattle and other animals as well as olive pits. The pits were sent for a carbon-14 test, without the technicians being informed that the source of the finds was the Temple Mount. The results matched the dating of the potsherds to 2,500-2,600 years ago. Here too, the importance of the finds lay in their precedential nature: this marked the first time a sealed layer of ground from the First Temple era was found on the Mount. The discovery also provided a possible archaeological basis for reconstructing the Temple Mount compound in that era.
The publicizing of this extraordinary information was disconcerting to Muslims who have been denying for years any connection between the Jewish people and the Temple Mount. The director of the Jerusalem Waqf Department, Azam al-Khatib, hastened to deny the possibility that the finds were indeed from the First Temple period. He explained the announcements as an act of deception aimed at bolstering the claim of Israeli sovereignty over part of the al-Aqsa compound. Member of Knesset Ibrahim Sarsur reacted similarly.
The Victory Arch of Flavius Silva
Another surprising and convincing find revealed evidence of a victory or memorial arch that the Romans built on the Mount after they destroyed the city and demolished the Temple. The find was documented by the Hungarian archaeologist Tibor Grull, who was in Israel in 2003 for his studies at the Albright Institute. During a visit to the Temple Mount, Grull accidentally discerned a stone slab, a fragment of a monumental inscription, with Latin writing on it. He approached the slab and, to his surprise, saw on it the name of the Roman governor Flavius Silva, destroyer of Masada, who is also mentioned in the writings of Josephus. The source of the slab was in Solomon’s Stables; it had already been uncovered in 1996 when the Waqf lowered the ground level of the Stables. The Hungarian archaeologist asked the Waqf for permission to document and photograph the find, and unusually, it was granted. In 2005 Grull published the finding in the journal of the Albright Institute. The item itself is currently stored in the Islamic Museum on the Temple Mount but is not shown or accessible to visitors.
Another eye-opening find, which the Muslims tried to obscure, was revealed anew by Dr. Orit Peleg-Barkat of the Hebrew University’s Institute of Archaeology. Peleg-Barkat’s doctoral dissertation dealt, among other things, with arched roofs of the passageway of the Huldah Gates (specifically the western Huldah Gate). In Second Temple days, especially during the three pilgrimage festivals, many pilgrims would enter the Temple through this passageway.
The etchings on the arched roofs of the passageway are tinged today with a thin and transparent layer of lime and decorated with plant and geometric designs. The arched roofs are within the territory of the Temple Mount, beyond its southern wall, in the space that is called al-Aqsa al-Kadim (“ancient al-Aqsa”). An archaeological expedition led by Benjamin Mazar already documented the arched roofs in the 1970s. Peleg-Barkat visited the spot again in 2004 and photographed them anew. In her work, she contests the claim that this passageway, with its decorations, is a relic of the Umayyad era, which came later.
A relic from the Herodian period. The decorated passageway of the Huldah Gates, through which pilgrims passed on their way to the Temple. Located today in the “ancient al-Aqsa” mosque. (Nadav Shragai)
After scrutinizing the decorations on the arched roofs, Peleg-Barkat found that the style of etching and the assortment of designs have clear parallels in the art toward the end of the Second Temple period. This examination, she concluded, “decides positively the date of the building in the days of Herod,” and therefore: “The copyrights for planning and decorating the passageway of the gate belong to artists and architects who worked in King Herod’s service.” She added, “The decorated entrance hall of the ‘Huldah Gates’ with their four arched roofs is the most complete remnant that has been preserved until now from the Herodian compound of the Temple Mount.”
Peleg-Barkat photographed and researched another intriguing architectural item that was located on the inner side of the Southern Wall, within Solomon’s Stables: a fragment of a cornice, likewise decorated with plant and geometric designs, of which secondary use was made at the time the Stables were built. Peleg-Barkat assessed that the source of the fragment was in the royal portico. According to Josephus, Herod built it at the southern edge of the Temple Mount plaza. The part that is visible in the Stables (today a mosque) belongs to the upper part of the cornice. It is decorated with two strips, one bearing a design of grapevine branches.
To all these should be added the tale of the four inscriptions that tell and substantiate, each in its own way, the story of the Temple and its existence on the Temple Mount. The Israel Museum displays a fragment of an inscription in Greek from the Second Temple period, which was found in 1935 during work on the road beside the Lions’ Gate, next to the Temple Mount. A similar inscription, preserved in its entirety, is now at the Istanbul Archaeological Museums. This one was found in 1871 on the wall of an Arab school north of the Temple Mount – on a repurposed building stone, used in the school’s construction, that was identified as a remnant of the Temple.
Both inscriptions forbid non-Jews to go beyond the grille that surrounded the Temple, threatening violators with death: “No foreign person will enter through the partition that surrounds the Temple to the surrounding courtyard, and whoever is caught will forfeit his life and will die.” These inscriptions are mentioned in descriptions of the Temple in Josephus’ book The Jewish War. Regarding the grille, Josephus writes that “whoever went past it [into the Temple Mount] to the second sanctified domain reached a stone partition that surrounds it, three cubits in height, that was very elegant.” At different points on the grille, stone slabs were affixed that warned – some in Greek letters, some in Latin letters – about the law of purity, which forbade non-Jews to enter the sanctum.
(courtesy of Dr. Eilat Mazar. All rights reserved to Dr. Eilat Mazar)
Inscription for the “House of the Tekiah” (the trumpet blast) (courtesy of Dr. Eilat Mazar. All rights reserved to Dr. Eilat Mazar)
Another inscription was discovered in excavations conducted after the Six-Day War by Prof. Benjamin Mazar, near the point where the Temple Mount’s Southern Wall and Western Wall converge. The inscription, which was found in fragmented form, was engraved on a stone that in Second Temple days was at the southwestern foundation of the Mount, and said: “To the House of the Tekiah [i.e., the trumpet blast] to [distinguish between sacred and profane].” It was above this foundation that the priest stood on Friday when he announced with a trumpet blast (tekiah) the entry of Shabbat, and the next day, with another blast, announced its departure. This practice is also documented by Josephus and the Mishnah.
Not far from the inscription “To the House of the Tekiah,” on the third level under the foundation of Robinson’s Arch, in the middle of the arch and upon the Western Wall, there is an engraving of two lines in Hebrew script based on Isaiah 66:14: “And when you see this, your heart shall rejoice, and your bones shall flourish like an herb.” As Prof. Benjamin Mazar suggested, this inscription may have expressed the inner hopes and sentiments of Jews who came to Jerusalem in the fourth century during the era of the Roman emperor Julian the Apostate, who allowed the Jews to renovate the ruins of the Temple.
Another spectacular find, while not confirming the Temple’s existence, corroborates the version of the Priestly Blessing that we know from the Torah, a version that the priests already made use of in the Temple. The archaeologist Dr. Gabriel Barkay discovered two tiny, rolled-up silver scrolls that served as amulets and contained the most ancient biblical Hebrew text ever found, namely, verses of the Priestly Blessing from the book of Numbers: “May the Lord bless you and keep you; may the Lord cause his face to shine upon you… and grant you peace.” The two scrolls were found in a burial cave from First Temple days in Ketef Hinnom in Jerusalem.
Still another discovery, which enriches our knowledge, was made during rescue excavations conducted a few years ago by the Israel Antiquities Authority in the Ramat Shlomo neighborhood of Jerusalem: an ancient quarry at least an acre in size. The excavation was part of work mandated by the City of Jerusalem to enable a school to be built for the neighborhood’s children. At this site, huge stones were quarried for purposes of governmental buildings in Second Temple Jerusalem. The quarry’s uniqueness lay in the huge size of the stones, which were up to eight meters long and similar in that regard to stones preserved in the lower sections of the walls of the Temple Mount. This marked the first time, and so far the only one, in which such a well-preserved quarry was discovered, one that can be linked to the enormous building projects of Second Temple Jerusalem. It was the use of such giant stones during the building of the Temple Mount compound that kept the structure stable for two thousand years, with no need for mortar. Also discovered in this excavation were coins and earthenware shards that were dated to the peak period of the building projects in Second Temple days.
Along with all these stand, as mute and lofty testaments, the walls of the Temple Mount, a part of the ancient landscape of Jerusalem to whose presence the eye is so accustomed that many of us have forgotten that they, too, are part of the evidence for the Temple’s existence. All agree that the walls were built during the period of Herod and his successors. Their location and form well fit the description in Josephus’ writings, which, in turn, is consistent with three more exciting finds. These were discovered by the archaeologists Ronny Reich and Eli Shukrun in the Herodian drainage tunnel that ascends from the Shiloach Pool (under the Herodian Road) to the foot of the southern corner of the Western Wall:
A gold bell that was dated to Second Temple days – a unique find of a sort never discovered in any other archaeological dig. It is reminiscent of the bells that were sewn onto the High Priest’s clothing as described in chapter 28 of the book of Exodus.
A sword of a Roman legionnaire in a leather scabbard.
An etching on a potsherd of the menorah of the Temple. The anonymous artist probably saw the menorah with his own eyes before etching its form in the clay while taking refuge in the drainage tunnel under the Herodian Road, fearful of the Romans who pursued the remnant of the rebels who were hiding there.
Here pilgrims were purified on their way to the Temple. A mikvah from Second Temple days at the foot of the Southern Wall of the Temple Mount (courtesy of Dr. Eilat Mazar. All rights reserved to Dr. Eilat Mazar)
In addition to all these are the dozens of ancient mikvahs from Second Temple days that were discovered at the foot of the southern wall of the Temple Mount. They, too, are part of the array of evidence and testimonies about the existence of the Temple at the site. The historical evidence indicates that pilgrims purified themselves in these mikvahs before entering the sacred space of the Temple on the Mount.
Despite all this, today, numerous Palestinians and Muslims claim that there is no archaeological find that confirms the existence of the Jewish Temple on the Temple Mount and that on the Mount itself, no remnants of the Temple have been found. They are right and wrong: while, indeed, no clearly identifiable remnant from the Temple itself has been preserved, the wealth of items testifying to the fact of its existence on the Mount — only a few of which were reviewed here — indicates that many Palestinians and Muslims are not speaking the truth. The lack of relics from the Temple itself stems from the fact that the Muslims have never allowed an organized archaeological excavation on the Mount. For many years, they have been trying to have it both ways: both forbidding excavations and asserting that no relics exist.
Yet the numerous incidental finds from the different parts of the Mount – including from the Sifting Project, which is the closest thing to an archaeological dig there – along with the many archaeological finds from around the walls and foundations of the Mount are enough to make clear that such claims are baseless. The attempts by Palestinian leaders like Yasser Arafat or Saeb Erekat to cast doubt on the Temple’s existence on the Mount or to distance it from that location by claiming that there was indeed a Temple, but in Nablus or Yemen, stem from one sole motive: their desire to expunge from the Temple Mount a competing Jewish historical narrative and a competing historical and religious awareness, since these could becloud their own historical and religious narrative on the Mount.
That is also why, in recent years, the Palestinians have not only been rewriting Jewish history but their own Muslim history as well.
After al-Aqsa, which is mentioned in the Night Journey of Muhammad (Surah 17 in the Koran), was identified in the prevailing Muslim exegesis as Jerusalem, the city became the third holiest place to Islam. In the Islamic tradition, Jerusalem was third in virtue and importance after Mecca and Medina. About these three cities, it is said, “One prayer in Mecca is equal to ten thousand prayers, a prayer in Medina is equal to a thousand prayers, and a prayer in Jerusalem is equal to 500 prayers.” According to modern research, the Umayyad Caliph Abd al-Malik built the Dome of the Rock in 691, about 60 years after Jerusalem was conquered by the Arabs. The al-Aqsa Mosque was built in 705 by the Umayyad Caliph al-Walid, son of Abd al-Malik. Since that time, more than 1,300 years ago, the two buildings have become an inseparable pair. The Dome of the Rock building, which was not originally a mosque, came to preserve and exalt the holy Foundation Stone. Within the Dome of the Rock, Muslims usually engaged in individual prayers. The al-Aqsa Mosque, however, was a place of prayer for the general public.
An Invented Narrative; a Rewritten History
To contend with the “Jewish story” that preceded the Temple Mount, many Palestinians and Muslims have altered the age of al-Aqsa and transposed it to the pre-Islamic era. A researcher of this change, Prof. Yitzhak Reiter, noted that “this was part of the attempt to ‘convert to Islam’ the period that preceded the period of the heralding of Islam by Muhammad, and to ‘Arabize’ Jerusalem and the Land of Israel. The process of Islamization and Arabization was driven by the need to claim a historical, Arab, and Islamic right to the sacred ground, before the Israelites – the aboriginal Jews – and the Christians were there.” To that end, old traditions were enlisted that attribute the building of al-Aqsa to Abraham, to the First Man, and to the time of the creation of the world.
The new Muslim narrative asserted, for example, that the al-Aqsa Mosque was not built somewhat more than 1,300 years ago – as modern research maintains – but, instead, by the “First Man” 40 years after the mosque in Mecca was built. The Jordanian Waqf Minister Abd al-Salam al-Abadi made this claim as early as 1995. The Saudi historian Muhammad Sharab likewise affirmed that al-Aqsa was built by the First Man, and so did the former mufti of Jerusalem and the Palestinian Authority, Sheikh Ekrima Sabri. According to Sabri, Solomon did not build the Jewish Temple but rather the al-Aqsa compound, which is a Muslim mosque. In recent years spokesmen of the Northern Branch of the Israeli Islamic Movement have stated that it was Abraham who built al-Aqsa about 4,000 years ago, 40 years after he built the Kaaba in Arabia with his son Ishmael.
Thus, to “Islamicize” the era that preceded the phenomenon of Muhammad’s proclamation of Islam, ancient Islamic traditions were recruited that were of negligible importance until that time, and more ancient strata were devised concerning the al-Aqsa Mosque, dating back long before the year it was constructed and before, of course, the presence of the Israelites in the Land of Israel. In recent years some Muslim figures have also, surprisingly, defined al-Aqsa for the first time as second, not third, in holiness – that is, after Mecca but before Medina. That view has been propounded, for example, by Sheikh Kamel Rian of the Northern Branch of the Islamic Movement.
To the varied archaeological and the ancient and numerous Muslim sources that identify the Temple Mount as the site of the Temple – notwithstanding the rewritten versions of Jewish and Muslim history – may be added, of course, a plethora of known and documented historical sources. These corroborate the Jewish connection to Jerusalem and the existence of the Temple. And while they are not the main subject of this work, the Jewish sources cannot be omitted: the Hebrew Bible, the Mishnah, the Gemara, the Midrashim, and multiple Jewish commentators all attest to the fact of the Temple and its existence for many years on the Temple Mount in Jerusalem. Some of the most important sources in this regard are in the tractate Middot of the Mishnah, which sets forth the dimensions of the Temple Mount, and even mentions the job of “Temple Mount man,” who along with other responsibilities was in charge of the shifts of the Levites who were stationed at the five gates of the Mount. Another tractate of the Mishnah, Parah, mentions the Temple Mount as the last station on the Path of the Bulls, who carried, from the Shiloach Pool, the pure, water-carrying children to the ceremony of the slaughter of the Red Heifer on the Mount of Olives opposite the Temple Mount. Another example is the Mishnah’s description of the rituals of the bringing of the First Fruits to the Temple on the Temple Mount.
To all this should be added the already-mentioned writings of the historian Josephus, who saw the Temple and its destruction with his own eyes. Josephus describes the Second Temple on the Temple Mount at great length, as well as the Roman victory procession in which the booty of the Temple implements was carried away. This procession is also recorded on an arch, built by Titus in Rome, that commemorates the conquest of Jerusalem in 70 CE. Engraved on the Arch of Titus are pictures and reliefs of the Temple implements as they are borne off by figures of Roman soldiers. In addition, the outstanding study by the Temple Mount researcher Prof. Asher Kaufman, The Temple Mount: Where Is the Location of the Holy of Holies?, published in English in 2004, sheds clear light on the Temple’s location on the Mount and the Jewish connection to it. Kaufman also elucidates the facts about the Holy of Holies, the place most sacred to Jews in the world, in general, and within the domain of the Temple Mount and the Temple, in particular.
The current Muslim insistence on erasing any connection between the Jewish people and the Temple Mount and on totally denying the existence of the Temple there also denies the history of Christianity and its sources. The New Testament contains more than 20 references to Jesus and his disciples in the Temple on the Temple Mount. In one of his articles, the historian Prof. Yaron Zvi Eliav notes that important episodes in Jesus’ youth occurred in the Temple. The adolescent Jesus stood out among the students who studied the Torah in the Temple. Simeon gives his blessing and foresees the messiahship of the baby Jesus at the time of korban hayoledet (the sacrifice by the woman who has given birth) and pidyon haben (the redemption of the firstborn) in the Temple. Some of the traditions also locate one of the temptations of Christ at a parapet of the Temple. Especially notable in this regard is the main and significant phase of Jesus’ last journey – a series of events that brought his life to its apex: the last supper, the trial, the crucifixion, and the resurrection, all of which occurred in Jerusalem.
The height of absurdity is reached by the Palestinian denial of the Jewish history on the Temple Mount, and thus incidentally of the Christian site referred to as “the Cradle of Jesus.” The cradle is a marble recess from the Roman era within an alcove at the southeastern corner of Solomon’s Stables on the Temple Mount. The Christian tradition, which was adopted by the Muslims, views this recess as the place where Jesus was laid after his mother, Mary, presented him at the Temple 40 days after his birth. In previous centuries, Muslim pilgrims would visit the spot and read Surah Maryam of the Koran beside it. The Muslims still identify the place as “the Cradle of Jesus” despite the fact that Jesus was a Jew and his history is inextricably linked to the Temple on the Temple Mount, whose existence, at that location, they now deny. To resolve this difficulty for themselves, in recent years, the Palestinians have begun to define Jesus as a Palestinian, sometimes even as “the first Palestinian martyr.” This stance – which contravenes historical research and the Christian faith – has been adopted, for example, by figures like Yasser Arafat, Jibril Rajoub, or the Mufti of Jerusalem Sheikh Muhammad Hussein. They and many other Palestinians do so despite the fact that the term “Palestine” appeared for the first time in history when the Romans changed the name of the Judea province to Syria Palestina as a punishment to the Jews after the Bar Kochba Revolt, that is, more than 130 years after Jesus’ birth. From a chronological-historical standpoint, the conjunction of the words Jesus and Palestinian is an impossibility, and it is clear that this is an invented identity.
Major pagan sources and plentiful Christian sources also testify to the Jewish connection to the Mount and the Temple. The Temple is indeed referred to by pagan historians who viewed it with their own eyes and were not influenced by the Jewish or Christian traditions. Examples include Berossus (third century BCE), who mentioned Nebuchadnezzar, king of Babylonia; Hecataeus of Abdera (around 300 BCE), who slandered the Jews by saying they bowed down to a statue in the form of a donkey that was in the Temple; Menander of Ephesus (second century BCE), who mentions Hiram, king of Tyre, and Solomon; Mamsis of Petra (around 200 BCE); Diodorus Siculus (from Sicily, first century BCE), who describes the siege of Jerusalem by Antiochus VII; Strabo (first century BCE); Tacitus (first century CE), who describes the Temple; and many others.
Later, important Christians also attest to the Jewish connection to the Mount and the Temple. The pilgrim of Bordeaux (in the year 333) describes an annual Jewish ceremony beside “the perforated stone” of the Western Wall or the Temple Mount. The monk Bar Tzoma (fifth century) tells of an annual Jewish celebration on Sukkot on the ruined Temple Mount. Hieronymus, one of the Church Fathers (fourth century), refers in his writings to a Jewish strictness about annually observing Tisha B’Av, the day of the destruction of the Temple. The Armenian Bishop Sebeos (seventh century) also mentions the Temple in his writings, and so does the Byzantine historian and monk Theophanes (eighth century), who describes how Omar seized control of “what was in the past the Temple that Solomon built.”
Thus, both the numerous Christian sources and the even more numerous ancient Muslim sources contravene the contemporary Muslim denial of any Jewish connection to the Mount and to Jerusalem. Against this backdrop, it is easy to understand the persistent Muslim refusal to allow archaeological digs – even painstakingly cautious ones – on the Temple Mount. This refusal has already been rooted for generations in the fear of the collapse of the bogus Muslim exclusivity to the place, and the possibility that archeological evidence will be found for the precedence and the fact of Jewish existence there.
This is also the soil from which a contemptuous Muslim attitude grew, as in the response of a qadi to a comment by Kaiser Wilhelm II, who visited the Mount in 1898. When the Kaiser expressed regret about the fact that “there are no excavations at such an important site,” the qadi who escorted him raised his eyes to the skies and said it was desirable “that a person should direct his eyes and his thoughts upward, to the heavens, instead of downward to the depths.” This is also the fertile ground for much later statements, like the one in 2009 by Kamel Khatib, then deputy chairman of the Northern Branch of the Israeli Islamic Movement, when he promised the Jews that “Tisha B’Av – their national day of destruction – will continue forever.” But one thread runs through these and many similar statements: fear, not to say dread, of the possibility that their lies will be exposed.
Bibliography
The full bibliographical list for this appendix is in the book by Nadav Shragai, Al-Aqsa Terror: From Libel to Blood (Jerusalem: Jerusalem Center for Public Affairs and Sella Meir, 2020), 331-337 (Hebrew).
Jerusalem Center Fellow Nadav Shragai served as a journalist and commentator at Ha’aretz between 1983 and 2009, and is currently a journalist and commentator at Israel Hayom. He has documented the dispute over Jerusalem for thirty years. His previous books include: Jerusalem: Delusions of Division (2015); The “Al-Aksa Is in Danger” Libel: The History of a Lie (2012); the ebook Jerusalem: Correcting the International Discourse - How the West Gets Jerusalem Wrong (2012); At the Crossroads: The Story of Rachel’s Tomb (2005); and The Temple Mount Conflict (1995).
His latest book is Al-Aqsa Terror: From Blood Libel to Bloodshed (Hebrew). Brig.-Gen. (ret.) Shalom Harari described the book as “a fascinating study of how the big lie is employed again and again to provoke waves of terror.” Amb. Dore Gold said: "The book has a deep analysis of the false and dangerous myth that turned into a battle cry by terrorists who in its name went to carry out terror attacks on Israelis.” It stands against claims that invert the truth, according to which “the legend of the bogus Temple is the greatest crime of historical forgery,” and against entire books that have been written in that vein.
In his book History of the Prophets and Kings, al-Tabari refers several times to the Temple Mount as the site of the Temple, and also identifies Isaac, not Ishmael, as the hero of the “Binding of Isaac” story. The famous commentator described David’s and Solomon’s involvement in building a mosque on the Temple Mount in a way that corresponds exactly, in many details, to the Bible’s description of the process of building the Temple. This description is typical of other, similar descriptions in Islam that point to a strong, ongoing connection to Jewish traditions.
David wanted to begin building the mosque and Allah disclosed to him: It is indeed a sacred structure. You defiled your hands with blood and will not build. But you will have a son whom I will coronate after you and his name will be Solomon. Him I will cleanse of the blood. When King Solomon built the mosque and sanctified it, David was a hundred years old, when he heard of the Prophet Muhammad…. The period of his kingship was forty years.
For al-Tabari, Solomon (Suleiman ibn Daud [David]) is the main prophet responsible for the construction on the Mount, where the Muslims built their mosques.
The Muslim geographer Muhammad al-Idrisi, who visited Jerusalem in the 12th century, likewise described “the Temple Mount that Solomon ben David built.” He added that “in the vicinity of the eastern gate of the gates to the Dome of the Rock is the shrine that was called the Holy of Holies, and it is impressive to look upon.”
Source: https://www.ancient-origins.net/news-history-archaeology/ancient-babylonian-tablet-provides-compelling-evidence-tower-babel-did-021378
Ancient Babylonian Tablet Provides Compelling Evidence that the Tower of Babel DID Exist
Half the world seems to say the Bible is pure bunk, while the other half says it’s, well, the word of God. Now comes a professor who isn’t religious to say that a baked tablet from ancient Babylon gives evidence that the biblical tower of Babel was real. And his evidence is quite persuasive.
In linguistics, there is a theory that there was a single, original language spoken by humankind. The Bible’s book of Genesis, Chapter 11, hews to that line too, in the passage about the tower of Babel.
Now the whole world had one language and a common speech. As people moved eastward, they found a plain in Shinar [Babylonia] and settled there.
The people decided to build a tower to the heavens to make a name for themselves and avoid being scattered around the world. But the Lord observed this tower’s construction and thought if his people could build this with one language, they could do anything. God decided to prevent them by scattering them around the world and imposing many languages on them.
No doubt the Bible story is quite different from the linguistic theory.
But as for the tower, Andrew George, a professor of Babylonian at the University of London, thinks he has found solid evidence for it in an ancient baked tablet from the city of Babylon.
In a video on Smithsonian.org, he details his theory, and it all sounds very plausible:
The baked clay tablet that Dr. George examined, discovered a century ago in Babylon (now modern-day Iraq) and now privately held, shows what the ziggurat looked like, with its seven steps. It shows the king with his conical hat and staff. And below is text that describes the commissioning of the tower’s construction.
“This is a very strong piece of evidence that the tower of Babel story was inspired by this real building,” Dr. George told Smithsonian. “At the top … there is a relief depicting a step tower and … a figure of a human being carrying a staff with a conical hat on. Below that relief is a text which has been chiseled into the monument, and the label is easily read. It reads:
Etemenanki, Ziggurat Babel.
“And that means ‘the Ziggurat or Temple Tower of the City of Babylon.’ The building and its builder on the same relief,” the professor says.
A reconstruction of the tower of Babel from a Smithsonian video screenshot
The text gives an account of the people enlisted to construct the tower, as translated by Dr. George:
From the Upper Sea [Mediterranean] to the Lower Sea [Persian Gulf] the Far-Flung Lands and Teeming Peoples of the Habitations I Mobilized In Order to Construct This Ziggurat of Babylon.
The Smithsonian video says this tablet gives further proof that the tower of Babel was an actual building.
“After Darwin cast a doubt on the story of a six-day creation, people began to ask what else in the Bible might not be true,” Dr. George told Breaking Israel News. “In the 19th century there was a discovery that the Assyrian kings described in the Bible were real and corroborated by archaeological evidence, making us ask now, how much more in the Bible is true?”
Experts had already thought King Nebuchadnezzar II actually did build a ziggurat in Babylonia after he established the city as his capital. The tablet provides more evidence.
The Tower of Babel by Pieter Bruegel the Elder (1563) (Public Domain)
The city of Babylon had been founded around 2300 BC about 80 miles south of present-day Baghdad. The Hittites sacked Babylon in 1595 BC, but Nebuchadnezzar began rebuilding the city in 612 BC, constructing the new edifice around an older tower.
Archaeologists think the tower of Babel was 300 feet along the sides and 300 feet tall. Only a fraction of the building remains, scattered and broken.
Top image: The baked tablet that had been deciphered by Dr. George. It is finely carved with a relief showing the king and tower and chiseled with text saying how people were gathered from all over to construct the ziggurat. (Smithsonian screenshot)
Comments
People did not move eastward in Genesis, but moved from the east. Therefore moving westward. ESV, KJB, ASV they all agree.
ScareBear wrote on 13 October, 2018 - 08:14
I’m not really certain Etemenanki was the original tower of Babel. I’m pretty sure Babylon was off and on for about 10,000 years and established several capitals, sometimes simultaneous capitals across Asia Minor and Egypt. This Etemenanki was probably a later construct and given the title Ziggurat Babel because of its size. This would actually more or less conclude the legend of the Tower of Babel was known by its later allusion of recreation. I’ve also deduced there was a mother tongue spoken by everyone and rapidly evolved as people scattered but civilizations evidence are very hard to preserve over a span of 10,000 years. The whole world was probably extremely different and had different resources with entirely different properties, like what is now sand could’ve been entirely crystalline and eventually crumbled after constant recycling of materials. I’m a pretty big recycler myself and I’m pretty sure if you take a piece of wood and recycle it it’s no longer a whole piece of wood, it’s a pressboard of sawdust, probably the same with babel bricks. Even the scientists visited this Etemenanki and deduced that most of the bricks were recast and reused for other buildings.
Denise B wrote on 6 August, 2017 - 11:22
I dispute this time frame for the Tower of Babel. This documentary doesn't take into account the time frame of the Bible.
The tower of Babel was around 2000BC not 500-600BC.
The prophet Jeremiah was at the 600BC time frame.
Moses was after the Tower of Babel.
John Collins wrote on 13 May, 2017 - 01:12
Might it have been abandoned at the time of the Early Bronze Age II collapse approximately 2181 BCE in which Mesopotamia was devastated?
Mark Miller has a Bachelor of Arts in journalism and is a former newspaper and magazine writer and copy editor who's long been interested in anthropology, mythology and ancient history. His hobbies are writing and drawing.
Source: https://answersingenesis.org/tower-of-babel/

Tower of Babel
The tower of Babel (2242–2206 BC) was a post-flood rebellion against God by Noah’s descendants. Though the Babel account is related in a mere nine verses (Genesis 11:1–9), the resulting judgment of this rebellion accounts for the variety of languages and people groups seen in our world today.
The Biblical Account
When God blessed Noah and his sons after the global flood, he told them to “be fruitful and multiply and fill the earth” (Genesis 9:1). But only about a century later, we see that man seems to have no interest in obeying the command to fill the earth.
Now the whole earth had one language and the same words. And as people migrated from the east, they found a plain in the land of Shinar and settled there. And they said to one another, “Come, let us make bricks, and burn them thoroughly.” And they had brick for stone, and bitumen for mortar. Then they said, “Come, let us build ourselves a city and a tower with its top in the heavens, and let us make a name for ourselves, lest we be dispersed over the face of the whole earth.” (Genesis 11:1–4)
Fueled by pride, the people preferred to “make a name” for themselves and build a city with a high tower, enabling them to remain together in defiance of God’s command. The proposed construction began. Composed of brick and mortar, this city was intended to be permanent and impressive—a fortress against any natural or supernatural attempt to disperse mankind throughout the earth.
But God was neither unaware of their actions nor powerless against their plans. In his mercy, he intervened—not by destruction as he had during the flood, nor by directly driving them out to be fugitives and wanderers (as in the record of Cain’s judgment; see Genesis 4:12). Instead, God divided their single language into multiple language families.
And the Lord came down to see the city and the tower, which the children of man had built. And the Lord said, “Behold, they are one people, and they have all one language, and this is only the beginning of what they will do. And nothing that they propose to do will now be impossible for them. Come, let us go down and there confuse their language, so that they may not understand one another’s speech.” (Genesis 11:5–7)
For the first time in earth’s history, there was a language barrier. Without a common language, the people who had been so adamant about staying together were now unable to even understand each other. Construction of the city ceased—whether because they lost interest in the city due to the futility of attempting to coordinate such a massive project without a means of communication (not to mention losing the appeal of living together as one people) or because they recognized God’s judgment and feared a worse sentence should they attempt to continue in their rebellion.
Whatever the case, God’s judgment was effective. The attempted “one-world kingdom” fractured. Smaller groups formed from those sharing each of the new languages, and people began scattering from the city.
So the Lord dispersed them from there over the face of all the earth, and they left off building the city. Therefore its name was called Babel, because there the Lord confused the language of all the earth. And from there the Lord dispersed them over the face of all the earth. (Genesis 11:8–9)
Why Does It Matter?
Besides being a fascinating part of mankind’s history, the biblical account of the events at Babel also answers some of the questions and problems of our day. It especially challenges certain evolutionary ideas and provides a reasonable explanation for the diversity in languages and people groups seen today.
Diversity of Languages
Perhaps the most obvious area explained by Babel is the origin of the various languages present in our world. While an evolutionary worldview might expect all languages to trace back to a single parent language (much like it claims all life traces back to one organism), that isn’t what researchers have found. Instead, language families of today trace back to multiple unrelated parent languages—exactly what one would predict from the Babel account.
What Basis for Racism?
The evolutionary story of man’s origins is inherently racist (because of its implications that some people groups are more evolutionarily advanced than others and, therefore, that the “lower” groups are more closely related to primates). However, the Bible’s account of the events at Babel confirms that all people are descended from the groups split at Babel (who were all direct, recent descendants of Noah and, consequently, from the first humans—Adam and Eve—who were special, direct creations of God). There is no basis for racism as all people are related and comprise only one “race” of people made in the image of God. Therefore, they are all equal and all equally human.
Physical Differences Between People Groups
But why, then, do people from various parts of the world look so different from each other? The explanation for differences in physical appearance is simple. As the groups spread out and separated from each other after Babel, their gene pools were largely isolated—generally more so, the farther apart they were—and physical features such as certain skin shades or eye shapes gradually became dominant within each group. These distinctive features are still reflected in the diversity among people groups today. But far from evidence of evolution, these minor genetic differences (and they are minor—making up only a very small percentage of an individual’s DNA) are the natural result of the loss of genetic variability that occurs when people groups are isolated from one another.
There exist a great many confirmations of the Bible’s account of the tower of Babel and what happened as a result. Language changes, ziggurats, names of Noah found throughout the world, and tower legends are excellent confirmations of the events at Babel.
God’s gentleness in judging the rebels at Babel is a lesson for us today. By changing one language into many, he separated nations more effectively than any Great Wall of China. God stepped in to prevent the human race from falling under the sway of a single, absolute tyrant over all the earth.
In their efforts to explain the multitude of languages, secular theories come up empty. They are upstaged by the biblical narrative, which credits God with the gift of language and the vast diversity of different language families. The words language, tongue, speech, and word appear in the Bible at least 1,401 times.
“Why do intellectually superior humans have around 7,000 distinct languages?” queries evolutionary biologist Mark Pagel. Pagel heads a team searching for an evolutionary explanation for our many languages. The biblical history of the dispersion from the tower of Babel indicates that diversity of language emerged from “a plain in the land of Shinar.”
Did the Tower of Babel Really Exist? (internationalfishers.com)
The Joseon (Chosŏn) kingdom existed in Korea between 1392 and 1897. It was isolated from Europeans and therefore probably had little or no contact with Christianity or Judaism. In 1653, a group of sailors from the Netherlands was sailing to Japan when they were shipwrecked on Jeju Island, off the coast of South Korea. Thirty-six Dutch sailors survived the sinking of their ship and were taken as prisoners from Jeju Island to the capital city of Seoul. They spent twelve years in Korea, during which time they learned the Korean language.
In 1666, eight of the surviving prisoners were able to escape to Japan. One of those survivors, named Hendrik Hamel, spent a year in Nagasaki, Japan writing about his experiences in Korea. In a book, later translated into English and titled “Hamel’s Journal,” he wrote about the beliefs of the Confucian monks. He wrote, “Many monks believe that long ago all people spoke the same language, but when people built a tower in order to climb into heaven the whole world changed.”
How did the Confucian monks come to this belief about a tower and one language for all people? It is unlikely that they ever encountered a Christian, a Jewish person, or a Bible. This belief could be a coincidence, but when we compare it to the biblical account of the Tower of Babel, it seems very unlikely that it arose by chance.
Genesis 11:1-9 says, “Now the whole world had one language and a common speech. As people moved eastward, they found a plain in Shinar and settled there. They said to each other, “Come, let’s make bricks and bake them thoroughly.” They used brick instead of stone, and tar for mortar. Then they said, “Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves; otherwise we will be scattered over the face of the whole earth.” But the Lord came down to see the city and the tower the people were building. The Lord said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. Come, let us go down and confuse their language so they will not understand each other.” So the Lord scattered them from there over all the earth, and they stopped building the city. That is why it was called Babel—because there the Lord confused the language of the whole world. From there the Lord scattered them over the face of the whole earth.”
Now, let’s compare the beliefs of the Korean monks to this passage in Genesis. The book of Genesis was written in the vicinity of the nation of Israel between 1450 and 1410 BC. The Korean monks’ belief about one language and the tower was recorded in 1660. This means that these accounts are separated by more than 3,100 years and 8,065 kilometers.
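The time gap claimed here can be sanity-checked with simple arithmetic (a sketch using the article’s own dates of 1450 BC and AD 1660; note that there is no year 0 between 1 BC and AD 1):

```python
# Years between the earliest proposed writing date of Genesis (1450 BC,
# per the article) and the recorded belief of the Korean monks (AD 1660).
genesis_bc = 1450   # article's earliest date for the writing of Genesis
monks_ad = 1660     # article's date for the Korean monks' belief
gap_years = genesis_bc + monks_ad - 1  # subtract 1: no year 0 exists

print(gap_years)  # 3109, i.e. "more than 3,100 years"
```

The distance figure (8,065 kilometers) is taken from the article as given.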
According to the Korean monks:
The entire world’s population spoke a single language.
The people constructed a tower.
Their goal was to climb to heaven.
Their efforts affected the entire world.
According to Genesis 11:
11:1 “Now the whole world had one language and a common speech.”
11:4 “Then they said, “Come, let us build ourselves a city, with a tower …“
11:4 “Then they said, “Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves; otherwise we will be scattered over the face of the whole earth.”
11:8-9 “So the Lord scattered them from there over all the earth, and they stopped building the city. That is why it was called Babel—because there the Lord confused the language of the whole world. From there the Lord scattered them over the face of the whole earth.”
This comparison is just one piece of evidence that the Tower of Babel really existed in history, just as the Bible teaches.
Here is another piece of evidence to consider: the fact that there are more than 6,500 spoken languages in the world today (and some languages have gone extinct in the past). Non-Christian scientists have confirmed that all male humans living today are descendants of one man, called Y-chromosomal Adam. They have also concluded that all humans, both male and female, are descendants of one woman, called Mitochondrial Eve. Even though scientists have not confirmed that Y-chromosomal Adam and Mitochondrial Eve married and had children together, their findings help to confirm, rather than deny, the biblical history of Adam and Eve. The Bible teaches that almost all of Adam and Eve’s descendants were killed during the global flood of Noah recorded in Genesis chapters 6-8. Only eight people survived that flood: Noah and his wife, Noah’s three sons, and their three wives. The account of the Tower of Babel gives a logical explanation for why there are more than 6,500 languages in the world today.
Many people who dismiss the Tower of Babel as a myth don’t recognize that if humans were not created but only evolved from primates, then humans would all have spoken one language, at least at the beginning of the human species. They also have difficulty explaining why that one language separated into 6,500 different languages, with the result that most humans in the world cannot communicate with each other at all. Language barriers make it more difficult for people to trade with each other, and they also make wars between different language groups more likely. So, according to the theory of evolution, there really isn’t a strong reason for so many languages to exist at all.
God separated the languages at the Tower of Babel because the people did not want to honor and obey God. According to Genesis 11:4 the people wanted to “make a name for ourselves; otherwise we will be scattered over the face of the whole earth.” This was in direct disobedience to God’s command in Genesis 1:28 “God blessed them and said to them, “Be fruitful and increase in number; fill the earth and subdue it.”
God also separated the languages to prevent all the people from becoming completely evil, as they had become before the Flood of Noah. We can read about this wickedness in Genesis 6:5-8. Remember that the whole human race probably spoke one language before the flood too. Genesis 6:5-8 says, “The Lord saw how great the wickedness of the human race had become on the earth, and that every inclination of the thoughts of the human heart was only evil all the time. The Lord regretted that he had made human beings on the earth, and his heart was deeply troubled. So the Lord said, “I will wipe from the face of the earth the human race I have created—and with them the animals, the birds and the creatures that move along the ground—for I regret that I have made them.” But Noah found favor in the eyes of the Lord.” God separated the languages to avoid either having to destroy humans again for their complete sinfulness or allowing humans to become so evil and technologically advanced that we would destroy ourselves. Sometimes we think that high technology will save humanity, and some of it, like medicines and better farming equipment, is very good. However, as helpful technology increases, harmful technology in the form of powerful weapons increases as well. Because of the Tower of Babel, the growth of technology was slowed, so that it has taken thousands of years for extremely destructive weapons to be made.
There is also other physical evidence for the Tower of Babel that has recently been discovered by non-Christian researchers. A tablet from around 600 BC was discovered in Babylon, Iraq, about 100 years ago, but experts could not read it until recently. Dr. Andrew George of the University of London was finally able to translate it and discovered that the writing on the tablet was about the construction of a tower identical to the Tower of Babel in Genesis. Bodie Hodge, a researcher with the Christian ministry Answers in Genesis, also agrees that the tablet is describing the Tower of Babel of Genesis.
Some may ask, if the Tower of Babel was real, why haven’t archaeologists found it yet? Here are some possible explanations.
Most archaeologists assumed that the Tower of Babel was just a myth, so they did not try to look for it.
According to the Bible, the Tower of Babel was never fully constructed. God may have confused the languages and stopped the construction project before it was big enough to last for thousands of years.
Sometimes people take the bricks from an abandoned building and reuse them to build houses or other structures.
The Tower of Babel shows us that God separated the languages to prevent human sin and self-destruction. However, the story does not end there. Sin may have separated the languages and people groups, but God sent his one and only Son, Jesus Christ, to save us from our sins and to bring us back together in love and unity.
John 3:16 says, “For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life.”
In Mark 16:15 Jesus said, “Go into all the world and preach the gospel to all creation.”
One day, those of us who have received the gospel and put our faith in Jesus Christ as our Lord and Savior will get to be part of this wonderful experience in Heaven. Revelation 7:9-10 says, “After this I looked, and there before me was a great multitude that no one could count, from every nation, tribe, people and language, standing before the throne and before the Lamb. They were wearing white robes and were holding palm branches in their hands. And they cried out in a loud voice: “Salvation belongs to our God, who sits on the throne, and to the Lamb.”
Because of Jesus Christ, sin will no longer separate us into different languages, and we can also have peace and unity with God. This is good news.
Darryl Record
A Christian Apologist, Author, Missionary, Husband, and Father. Darryl has an MA in Christian Apologetics from Biola University (CA), an MA in TESOL from Azusa Pacific University (CA), and a BA in Political Science from Truman State University (MO).
Worldwide Tech & Science (worldwidegadget.blogspot.com)
Friday, May 12, 2017
The Tower of Babel was real. Stone tablet is a solid proof. Video.
Biblical scholars have long debated whether the Tower of Babel really existed. Now, a remarkable stone tablet never before shown on film appears to settle that question.
The first episode of a new Smithsonian Channel series, "Secrets," is drawing attention to an ancient stone tablet presented as evidence that the biblical Tower of Babel actually existed in antiquity, according to a report from the Spanish news portal ABC.
In the video below, Andrew George, Professor of Babylonian at the University of London, examines an ancient tablet from the sixth century BC, found in Babylon more than a century ago but not studied until now. It depicts a stepped structure with seven tiers and a human figure holding a scepter, whom the expert identifies as King Nebuchadnezzar II, the most famous ruler of Mesopotamia, together with an inscription: "Etemenanki, Ziggurat Babel," that is, the "Tower of the Temple of Babylon."
In his opinion, this tablet, which belongs to the private collection of the Norwegian businessman Martin Schøyen and is shown for the first time in the Smithsonian recording, is solid proof of the existence of the Tower of Babel.
Another inscription on the stone records that, for the construction of this ziggurat of Babylon, people from numerous settlements were mobilized, "from the upper sea," that is, the Mediterranean, "to the lower sea," that is, the Persian Gulf.
"The myth of the multitude of tongues comes from the context described in the wake of the multitude of villages enlisted in the construction of the tower," the professor tells Breaking Israel News. "Many languages would be spoken in the play. From there can the idea of the Bible come from the confusion of tongues, "he continues.
"As a asiriologo, I do not deal in the Bible, and I am not a religious person, but in this case, I can say that it is a real building that seems to be the inspiration for the biblical story," admits George in the interview.
The professor recalls that "in the nineteenth century it was discovered that the Assyrian kings mentioned in the Bible were real and were corroborated by the archaeological evidence, making us wonder, in turn, how much more is true in the Bible?"
There is consensus among historians that Nebuchadnezzar II ordered a ziggurat to be built in Babylon after rebuilding the city and making it his capital. The site of the tower is located in an area known today as Al Qasr, south of Baghdad.
The ziggurat of Nebuchadnezzar, which archaeologists refer to as Etemenanki, must have had seven floors that reached a height of 91 meters, with a temple of Marduk at its peak. | Friday, May 12, 2017
The Tower of Babel
New Creation Blog (newcreation.blog)
The passage goes on to say that the Lord came down and, seeing the city and its tower, stated that these people would be unstoppable if they all had the same language. For this reason, He confused their language so that they couldn’t understand each other. This forced them to scatter across the globe – the very thing that they were trying to avoid.
It is possible to compare specific details in the biblical account of the Tower with historical and archaeological research. This article will analyze the historical and archaeological data related to the Tower of Babel. Its goal is to examine the historicity of the Tower, as well as the structure type, location, and timeframe. It will also address the confusion of the languages.
Was the Tower of Babel Real?
The Bible presents the Tower of Babel fiasco as a real historical event. For those who believe in the Bible as an accurate historical document, that is enough to prove that it really happened. Others, however, view the scriptures more critically. For example, Frederick (2012) suggests that the Tower of Babel is a mythological story. He thinks the biblical writer invented it to help explain the origins of the Hebrew people.
Did Moses Write Genesis?
Moses is the traditional author of Genesis. Both the Old Testament (eg. 2 Chronicles 34:14) and the New Testament (eg. Mark 12:26) cite Moses as the author of the Book of the Law, which likely refers to the first five books of the Bible. He lived hundreds, and possibly over 1,000, years after the Tower of Babel event. If Moses recorded a true account of a real event, either the story passed down from generation to generation until it reached Moses or God inspired Moses to write it without receiving any human knowledge of the event.
Did Someone Else Write Genesis?
Another school of thought places the authorship of Genesis much later, during or after the Babylonian exile in the sixth century BC. This is a popular theory among biblical scholars, but it ignores some key biblical and archaeological data. This theory widens the gap between the Tower of Babel and the writing of the book of Genesis. Thus, if the author recorded a true event, it had first been passed down through many, many generations, which lessens the probability of it containing an accurate record of events. However, if the events recorded in Genesis prove to be archaeologically accurate, as I hope to show below, this late-date authorship theory becomes even less plausible.
Making Sense of It All
Regardless of the author, it is certain that Genesis was written long after the Tower of Babel event occurred. Moses, who grew up in Egypt, would not have been familiar with Mesopotamian culture. On the other hand, if an exiled Jew wrote Genesis, he or she would have been familiar with Babylon, but due to the passage of time, it is unlikely the author could have produced accurate details about such an early event.
It is certainly acceptable to continue believing the truth of the Bible even in the face of criticism. However, it can also be helpful to examine the facts in order to see how the evidence lines up with the biblical text. The remainder of this article will analyze the archaeological and historical data to determine whether there is evidence that the Tower of Babel event occurred as described in Genesis 11.
What Was the Tower of Babel?
The Type of Structure
A Ziggurat
There is wide agreement among scholars that the Tower of Babel was a ziggurat (Garrett 2010, 20; Walton 2008; Taylor 1973, 135; Price 2017, 70–71). Ziggurats were ancient Mesopotamian temple-towers. They were constructed in a stair-step fashion, with each tier smaller than the one below it. At the top lay a small shrine dedicated to a particular deity. The earliest known ziggurats date to around 3200 BC (Walton 2008).
Linguistic studies have demonstrated that the purpose of ziggurats was to create a link between earth and heaven. The names of some known ziggurats illustrate this concept. A ziggurat at Larsa had the name “The House of the Link between Heaven and Earth,” while one at Borsippa bore the title “The House of the Seven Guides of Heaven and Earth.” The title of another ziggurat in Babylon was “The House of the Foundation-Platform of Heaven and Earth,” and the one at Sippar was called “The Temple of the Stairway to Pure Heaven” (Garrett 2010, 20; Baizerman 2015).
The Building Materials
Genesis 11:3 specifies that the Tower of Babel’s construction consisted of baked brick and mortar. With their first known use dating to the late Uruk period, baked bricks were the prime building material for the construction of ziggurats. Bitumen, a sticky, black, petroleum-based substance, was the mortar of choice (Walton 2008). Not all ancient cultures used baked bricks. For example, the Egyptians constructed the pyramids of sun-dried bricks.
Where Was the Tower of Babel?
According to Genesis 11:2, the Tower of Babel lay in the plain of Shinar. Many scholars conclude that Shumer is the Akkadian equivalent of the Hebrew word Shinar (Habermehl 2011, 29). Therefore, they equate Shinar with the region of Shumer in southern Mesopotamia. Even those who do not equate Shumer with Shinar generally place the location of the Tower in that region (Zadok 1984, 240).
Was it in Babylon?
Scholars have traditionally equated Babel with Babylon, an ancient Mesopotamian city that lies on the Euphrates river (Ross 1981, 122; Price 2017, 70). Some early researchers, looking for ancient towers in the area of Babylon, believed the site of Birs Numrud was the Tower of Babel. A later view suggested instead that an ancient structure known as E-temen-an-ki was the Tower of Babel (Kraeling 1920, 276).
Candidate Sites in Babylon
E-temen-an-ki was an impressive 7-story tower, and became one of the wonders of the ancient world. However, this tower, which dates to the 7th century BC at the earliest, is not nearly old enough to be the Tower of Babel (Ross 1981, 123).
According to an inscription found at Birs Numrud, Nebuchadnezzar, a king of Babylon in the 6th century BC, extensively renovated and restored the ancient tower (Rawlinson 1861, 27–32). He made it into a shining, multi-layered tower and used gold, silver, and precious stones in the construction. Nebuchadnezzar recorded that the tower was ancient when he restored it, but he left no record of how old it was at that time (Peters 1921, 158). The extensive renovation appears to have destroyed the earlier remains necessary to determine the age of the original structure.
There is some debate among scholars as to whether Nebuchadnezzar restored Birs Numrud or E-temen-an-ki (Peters 1921, 158; Price 2017, 72). It is possible that he restored both.
Regardless of the actual age of these towers and whether either one qualifies as a candidate for the Tower of Babel, it appears that Nebuchadnezzar believed that he had restored and completed the actual Tower of Babel. He boasted, “I made it the wonder of the people of the world, I raised its top to the heaven” (Price 2017, 72).
Difficulties Dating the Babylon Towers
Although scholars have considered both of these towers as candidates for the Tower of Babel, it is difficult to determine whether either one is old enough to qualify. In fact, there is some question regarding the age of Babylon itself. Since the water table of the Euphrates River shifted over time, it obliterated early archaeological remains (Walton 1995, 174).
Was it Somewhere Else?
Recently, several scholars have questioned the traditional location of Babylon as Babel. Habermehl (2011, 46) suggests that Babel lay in northern Syria, but she does not suggest a candidate site.
The ruins of the ziggurat base at Eridu
Conversely, Petrovich (personal communication with the author) suggests that Eridu, the oldest known city in the world, was the site of the Tower. He notes that the name Babylon belonged to various cities throughout the history of Mesopotamia. Eridu was one of these cities.
Petrovich searched the archaeological record for evidence of a dispersion of people from the region of Shumer. He discovered that there was a movement of people away from Eridu during the Late Uruk III period.
During this time the people of Eridu constructed a giant raised platform, presumably intended as the base for a large temple-tower. For unknown reasons, they abandoned the construction project, which coincided with a dispersion of people in various directions. Some built new communities, while others moved into existing settlements, sometimes slaughtering the existing residents. The Uruk expansion was a violent movement of people groups, suggesting a lack of harmony among them. This makes sense if they could no longer communicate with each other.
When Was the Dispersion from Babel?
Where Does It Fit in Biblical History?
Genesis 10:25 notes that the earth was divided in the days of Peleg. The majority of creationists believe that this refers to the division of people at the Tower of Babel (Habermehl 2013). Since Genesis 11:10–27 provides a genealogy including ages, it should be easy to calculate the amount of time between the Flood and the Tower of Babel.
However, a complication arises in the fact that the Masoretic text and the Septuagint, the two main ancient biblical texts, contain conflicting numbers. Adding up the figures in the Masoretic text yields a total of 101 years between the Flood and the Tower of Babel event, assuming it occurred close to Peleg’s birth. The same timespan in the Septuagint adds up to 531 years.
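These totals can be reproduced from the begetting ages in Genesis 11 (a quick sketch; the per-generation figures below are the commonly cited Masoretic and Septuagint numbers, with Arphaxad born two years after the Flood):

```python
# Years from the Flood to Peleg's birth, summed from the begetting ages
# in Genesis 11:10-16. Each entry is the father's age at his son's birth,
# except the first, which is the gap between the Flood and Arphaxad.
masoretic = {
    "Flood to Arphaxad": 2,
    "Arphaxad to Shelah": 35,
    "Shelah to Eber": 30,
    "Eber to Peleg": 34,
}
# The Septuagint gives higher begetting ages and inserts the extra
# generation of Cainan between Arphaxad and Shelah.
septuagint = {
    "Flood to Arphaxad": 2,
    "Arphaxad to Cainan": 135,
    "Cainan to Shelah": 130,
    "Shelah to Eber": 130,
    "Eber to Peleg": 134,
}

print(sum(masoretic.values()))   # 101 years
print(sum(septuagint.values()))  # 531 years
```

The Septuagint’s larger total comes from its higher begetting ages plus the additional generation of Cainan.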
Where Does It Fit in World History?
Another point of consideration is how the Tower of Babel fits into world history. Many biblical scholars assume that Noah’s descendants remained a unified group until the Tower of Babel event. This event forced them to adopt a ‘Stone Age’ lifestyle as they spread out across the earth. If this is the case, the Tower of Babel would predate almost all settlements and civilizations worldwide.
However, some scholars have suggested that the Tower of Babel event did not include all of Noah’s descendants. The Bible does not specify that the people remained in one group until that point, only that they had one language (Genesis 10:32–11:2).
Walton (2008) examined the technology required for the Tower of Babel and suggested that four key components were necessary: baked bricks, ziggurats, urbanization, and central government. He proposed that while these four criteria were necessary, the Tower of Babel could predate the point at which these technologies permanently took hold, because the confusion of the languages created an effective setback in technological advancement; the technologies would then have successfully re-emerged at a later date. Walton concluded that the Tower of Babel event occurred in the Ubaid period or perhaps the Uruk period.
Petrovich (personal communication with the author) believes that the Tower of Babel involved only a portion of the earth’s population and occurred after the development of ‘Stone Age’ technology. As noted above, he dated the event to the Late Uruk III period. At this point in history, humanity was in the process of redeveloping technology lost at the time of the Flood, transitioning from a hunter-gatherer lifestyle to a more agricultural lifestyle, and finally to urbanized living. This time period meshes well with the Septuagint date for the Tower of Babel.
What About the Confusion of the Languages?
Linguistic studies can shed light on the Tower of Babel event. Interestingly, it appears that languages have not evolved from one single root language. Rather, multiple languages seem to have appeared, fully developed, very early in history (Curtis 1990). This lack of gradual language development is inexplicable without a supernatural event such as the Tower of Babel.
Was Sumerian the First Language?
Additionally, research suggests that the Sumerian people were the first major people group to settle in southern Mesopotamia. Oddly, the Sumerian language included some words, particularly in regard to agriculture, that seem to come from a different language. Typically, this occurs when one group of people conquers another and takes over an area. Yet, in this case, there is no evidence of a cultural shift (Aling 2004).
Aling (2004) suggests that this is evidence of the confusion of the languages at the Tower of Babel. After God confused the languages, the people who remained in southern Mesopotamia spoke a different language than before. However, certain words, such as those used in place names, may have survived from the original language. This would explain why the Sumerian language has some words which seem to have been carried over from a prior language even though there is no evidence that people of a different culture lived there before them.
A Sumerian Legend
Depiction of Enlil
There is an ancient Sumerian legend known as Enmerkar and the Lord of Aratta. It tells of a king of Uruk who attempted to build a ziggurat temple in Eridu. He describes the structure as “a temple brought down from heaven.” The story goes on to describe how the god Enlil changed the speech of the people from many languages to one language in order to unify them in worship (Kennedy 2020, 22–23).
This story bears a marked resemblance to Genesis 11:1–9, but the details are reversed. Whereas the Tower of Babel was intended to reach up to heaven, the tower in the legend came down from heaven. In Genesis, God changed the languages from one to many, while in the story, Enlil changed the languages from many to one. Despite these differences, the story of Enmerkar demonstrates that the Sumerian people retained a memory, albeit convoluted, of the Tower of Babel event.
Summary
So, what does the evidence suggest? How do the archaeological findings line up with the biblical account of the Tower of Babel? Did the author of Genesis record actual history? Or, as the critics suggest, is it a fabricated account?
The Construction of the Tower
The nature of the Tower of Babel matches well with ancient Mesopotamian ziggurats, religious towers intended to reach to heaven. The building materials mentioned in Genesis 11:3 are consistent with those used in the construction of ziggurats. It is highly unlikely that Moses or an even later author would have been familiar enough with this type of construction to fabricate such a story. Therefore, these details suggest that the account in Genesis 11 describes a real tower from early Mesopotamian history.
The Location of the Tower
Although scholars have suggested various candidate sites for the Tower of Babel, conclusive evidence seems to be lacking for any of the possible sites in the Babylon area. However, the location of the Tower, as described in Genesis, finds a close parallel in the archaeology of Eridu, the world’s oldest city. Even without conclusive evidence, the similarities between the biblical account and ancient Mesopotamian history and myths strongly suggest that the Tower of Babel account originated in ancient Mesopotamia.
The Timeframe of the Event
There is some disagreement among scholars regarding the timeframe of the Tower of Babel event. However, whether it occurred shortly after the Flood or later after many people groups had already dispersed, it seems clear that the technology necessary for the construction of the tower existed early in Mesopotamian history.
The Confusion of the Languages
Linguistic studies suggest that multiple fully-developed languages appeared early in human history. Furthermore, ancient Sumerian mythology suggests a memory of an event in which a deity changed the languages of the people. Although the details are different from the biblical account, it seems likely that these myths refer to the same event. Thus, linguistics and mythology both point to the historical reality of the Tower of Babel event.
Conclusion
The synchronisms between the biblical account of the Tower of Babel and ancient Mesopotamian archaeology are striking. When compared to archaeological findings from ancient Mesopotamia, the specific details in the biblical account point to its historic reliability. Studies regarding the Tower of Babel are ongoing. Further research and archaeological digs may reveal new data. Until then, even without conclusive evidence regarding the location and timeframe of the Tower of Babel, it seems clear that the narrative is firmly tied to ancient Mesopotamia, just as the Bible states. This suggests that the author of Genesis had a very good understanding of these events and recorded them accurately. It seems very likely that Moses was the author of Genesis since he spoke with God (Numbers 12:6–8) and could have received the details directly from Him. It seems impossible that any author could have fabricated such precise and accurate details hundreds or thousands of years after the event.
Abigail is a PhD student at Ariel University, where she is studying archaeology. She has a Master’s degree in Biblical History and Archaeology from The Bible Seminary. She is the Assistant Dig Director at the Shiloh excavations in Israel and has served as Objects Registrar and Square Supervisor at both Shiloh and Khirbet el Maqatir. She has also worked at the Mount Zion excavation in Jerusalem and at the Temple Mount Sifting Project.
We know for sure that Moses wrote the Pentateuch because there are several references to him being the author throughout scripture. Jesus even refers to the Pentateuch as the Book of Moses in Mark 12:26.
Hi Brittney,
You are right, anyone who takes the Bible at face value (myself included) can see clear evidence for Moses’ authorship of the Pentateuch. Unfortunately, there is a prominent idea among Bible scholars that the Pentateuch was not written until much later. The reason that I brought up this possibility in my article was not to give credence to the idea, but rather to emphasize the fact that the author (whether Moses or anyone later) did not have first-hand knowledge of the Tower of Babel. I will review the article and, if necessary, edit it to clarify this topic. I appreciate you bringing it to my attention.
michael McManus
February 9, 2022 8:30 PM
When I found something that seemed to contradict the bible I would do a little research and sometimes struggle for years over something like the flood story… Then someone puts it in a new light and you can see it line up. Another one is the levirate marriage in the NT that fixed a contradiction of names… Often times there’s some archaeological evidence. At the end of many lessons, you take things on faith that the answer will come… Not blind faith, but experienced faith…
Hi Michael,
That is so true. Thank you for sharing. There have been many instances in which skeptical scholars were convinced that a certain portion of the Bible was incorrect. Their view held sway until decades later, when an archaeological discovery came to light which proved that the Bible had it right all along. For example, the very existence of King David was strongly doubted until the “House of David” inscription came to light.
Dear Abby: Sorry you didn’t mention that the other 2 peoples with literacy in deep antiquity, the Chinese and the Mayans, also have records of an original language being diversified by celestial power. …Also strange how you got to Shumer without equating it to Sumer. Sumerian is an isolate language. They also had a newly spun-off language from Edenic, but they stayed in Shinar…. Peleg was not about the breakup of languages, but about the breakup of Pangea, the 1 יבשת YaBeSHeT, into our present continents. A walk from Shinar to southern Chile was once easy. … Any kid can see that East Brazil fits into West Africa… but many acadummies can’t. The meaning of one S. American language is “pleasant speech.” This infers that all other languages were awful terrors at some distant time. If I can send you only one of our E-Books it could be “Old Words in the New World: Discovering over 6,000 Biblical Hebrew roots in the Native Languages of the Americas.” (friend of David Rubin, ex-mayor of Shiloh)
Fred Berkland
June 2, 2022 5:48 AM
Many believe that the tower utilized technology from before the flood and was made out of materials that we have not yet created. This tower could have been miles high. Nimrod is said to have been developing a weapon similar to the Cern project in this present day. Also that he is the only human to convert himself to the Nephilum level. That is why they cut him up into over 60 pieces and distributed these pieces over the known world. Essentially; they wanted to kill God by having this weapon shoot interdimensional to reach Him. God’s angels led over 70 groups of languages to different locations on the globe. None of this dispersion was accidental or random. Sounds crazy, maybe not!
Who Invented Beer? (HISTORY, https://www.history.com/news/who-invented-beer)

When and where did beer first originate? It’s difficult to attribute the invention of beer to a particular culture or time period, but the world’s first fermented beverages likely emerged alongside the development of grain agriculture some 12,000 years ago.
As hunter-gatherer tribes settled into agrarian civilizations based around staple crops like wheat, rice, barley and maize, they may have also stumbled upon the fermentation process and started brewing beer. Some anthropologists have argued that these early peoples’ thirst for a brewed beverage may have contributed to the Neolithic Revolution by inspiring new agricultural technologies.
The earliest known alcoholic beverage may have been brewed around 7000 BCE in China in the village of Jiahu, where neolithic pottery shows evidence of a mead-type concoction made from rice, honey and fruit.
The first barley beer was most likely born in the Middle East, where hard evidence of beer production dates back about 5,000 years to the Sumerians of ancient Mesopotamia. Not only have archeologists unearthed ceramic vessels from 3400 B.C. still sticky with beer residue, but the “Hymn to Ninkasi”—an 1800 B.C. ode to the Sumerian goddess of beer—describes a recipe for a beloved ancient brew made by female priestesses.
These nutrient-rich suds were a cornerstone of the Sumerian diet and were likely a safer alternative to drinking water from nearby rivers and canals, which were often contaminated by animal waste.
Beer consumption also flourished under the Babylonian Empire, where its ancient set of laws, the Code of Hammurabi decreed a daily beer ration to citizens. The drink was distributed according to social standing: Laborers received two liters a day, while priests and administrators got five. At the time, the drink was always unfiltered, and cloudy, bitter sediment would gather at the bottom of the drinking vessels. Special drinking straws were invented to avoid the muck.
Few ancient cultures loved their beer as much as the ancient Egyptians. Workers along the Nile were often paid with an allotment of a nutritious, sweet brew, and everyone from pharaohs to peasants and even children drank beer as part of their everyday diet. Many of these ancient beers were flavored with unusual additives such as mandrake, dates and olive oil.
More modern-tasting libations would not arrive until the Middle Ages when Christian monks and other artisans began brewing beers seasoned with hops.
History of beer (Wikipedia, https://en.wikipedia.org/wiki/History_of_beer)

Beer is one of the oldest human-produced drinks. The first chemically confirmed barley-beer – from the area of modern-day Iran – dates back to the 5th millennium BC. The written history of ancient Egypt and Mesopotamia records the use of beer, and the drink has spread throughout the world; a 3,900-year-old Sumerian poem honouring Ninkasi, the patron goddess of brewing, contains the oldest surviving beer-recipe, describing the production of beer from barley bread, and in China, residue on pottery dating from around 5,000 years ago shows that beer was brewed using barley and other grains.[2]
Beer may have been known in Neolithic Europe as far back as 5,000 years ago,[6] and was mainly brewed on a domestic scale.[7] Beer produced before the Industrial Revolution continued to be made and sold on a domestic scale, although by the 7th century AD beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of beer moved from artisanal manufacture to industrial manufacture, and domestic manufacture ceased to be significant by the end of the 19th century.[8] The development of hydrometers and thermometers changed brewing by allowing the brewer more control of the process, and giving greater knowledge of the brewing product.
As almost any cereal containing certain sugars can undergo spontaneous fermentation due to wild yeasts in the air, it is possible that beer-like drinks were independently developed throughout the world soon after a tribe or culture had domesticated cereal. Chemical tests of ancient pottery jars reveal that beer was produced about 3,500 BC in what is today Iran, and was one of the first known applications of biological engineering, using the biological process of fermentation; the earliest chemically confirmed barley beer to date was discovered at Godin Tepe in the central Zagros Mountains of Iran, where fragments of a jug from between 5,400 and 5,000 years ago were found to be coated with beerstone, a by-product of the brewing process.[11]
The process by which the production of beer was discovered is a matter of debate.
In his book Bread, Beer and the Seeds of Change: Agriculture's Imprint on World History, author Thomas Sinclair suggests that the discovery of beer may have been accidental. The precursor to beer was soaking grains in water to make a porridge or gruel, as grain alone was chewy and hard to digest. Ancient peoples would heat the gruel and leave it out over the following days until it was gone. One benefit of heating the gruel was sanitation: the temperature required to denature grain proteins would also denature disease microbes. Leaving the gruel to sit would change it. Fermentation would occur, and people noticed the change in taste and effect. Yeasts settling on the mixture would rapidly consume the oxygen in it; the low oxygen would then cause the yeast to digest sugars by anaerobic respiration, releasing ethanol (alcohol) and carbon dioxide as by-products, and, hence, beer was born.[13]
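As an illustrative aside (not from Sinclair's book), the anaerobic step described above follows simple stoichiometry: each glucose molecule is split into two molecules of ethanol and two of carbon dioxide (C6H12O6 → 2 C2H5OH + 2 CO2), so the theoretical mass balance can be sketched:

```python
# Theoretical fermentation yields: C6H12O6 -> 2 C2H5OH + 2 CO2
GLUCOSE = 180.16   # g/mol
ETHANOL = 46.07    # g/mol
CO2 = 44.01        # g/mol

def ferment(glucose_g: float) -> tuple[float, float]:
    """Return the theoretical ethanol and CO2 masses (g) from a mass of glucose."""
    mol = glucose_g / GLUCOSE
    return 2 * mol * ETHANOL, 2 * mol * CO2

ethanol_g, co2_g = ferment(100.0)
print(f"{ethanol_g:.1f} g ethanol, {co2_g:.1f} g CO2")  # ≈ 51.1 g and 48.9 g
```

Mass is conserved in the reaction, and roughly half of the sugar's weight leaves as carbon dioxide gas, which is why a fermenting gruel visibly bubbles.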
The first written records of brewing come from Mesopotamia (ancient Iraq), with the oldest in the Sumerian language from approximately 4,000 BC.[16] These include early evidence of beer in the 3,900-year-old Sumerian poem honoring Ninkasi, the patron goddess of brewing, which contains the oldest surviving beer recipe, describing the production of beer from barley via bread.[17]
"Ninkasi, you are the one who pours out the filtered beer of the collector vat... It is [like] the onrush of Tigris and Euphrates."[18]
Approximately 5,000 years ago, workers in the city of Uruk were paid by their employers in beer.[19] Beer is also mentioned in the Epic of Gilgamesh, in which the 'wild man' Enkidu is given beer to drink. "... he ate until he was full, drank seven pitchers of beer, his heart grew light, his face glowed and he sang out with joy."[16]
Confirmed written evidence of ancient beer production in Armenia can be obtained from Xenophon in his work Anabasis (5th century BC) when he was in one of the ancient Armenian villages in which he wrote:
There were stores within of wheat and barley and vegetables, and wine made from barley in great big bowls; the grains of barley malt lay floating in the beverage up to the lip of the vessel, and reeds lay in them, some longer, some shorter, without joints; when you were thirsty you must take one of these into your mouth, and suck. The beverage without admixture of water was very strong, and of a delicious flavour to certain palates, but the taste must be acquired.[25][26]
Beer became vital to all the grain-growing civilizations of Eurasian and North African antiquity, including Egypt – so much so that in 1868 James Death put forward a theory in The Beer of the Bible that the manna from heaven that God gave the Israelites was a bread-based, porridge-like beer called wusa.[27]
These beers were often thick, more of a gruel than a drink, and drinking straws were used by the Sumerians to avoid the bitter solids left over from fermentation. Though beer was drunk in Ancient Rome, it was replaced in popularity by wine.[28]Tacitus wrote disparagingly of the beer brewed by the Germanic peoples of his day. Thracians were also known to consume beer made from rye, even since the 5th century BC, as the ancient Greek logographer Hellanicus of Lesbos says. Their name for beer was brutos, or brytos. The Romans called their brew cerevisia, from the Celtic word for it. Beer was apparently enjoyed by some Roman legionaries. For instance, among the Vindolanda tablets (from Vindolanda in Roman Britain, dated c. 97–103 AD), the cavalry decurion Masculus wrote a letter to prefect Flavius Cerialis inquiring about the exact instructions for his men for the following day. This included a polite request for beer to be sent to the garrison (which had entirely consumed its previous stock of beer).[29]
In ancient Mesopotamia, clay tablets indicate that the majority of brewers were probably women, and that brewing was a fairly well respected occupation during the time, being the only profession in Mesopotamia which derived social sanction and divine protection from female deities/goddesses, specifically: Ninkasi, who covered the production of beer, Siris, who was used in a metonymic way to refer to beer, and Siduri, who covered the enjoyment of beer.[31][32] Mesopotamian brewing appears to have incorporated the usage of a twice-baked barley bread called bappir, which was exclusively used for brewing beer.[33] It was discovered early that reusing the same container for fermenting the mash would produce more reliable results; brewers on the move carried their tubs with them.[34]
The Ebla tablets, discovered in 1974 in Ebla, Syria, show that beer was produced in the city in 2500 BC.[35] Early traces of beer and the brewing process have been found in ancient Babylonia as well. At the time, brewers were women as well, but also priestesses. Some types of beers were used especially in religious ceremonies. Around 1750 BC, the Babylonian king Hammurabi included regulations governing tavern keepers in his law code for the kingdom.[36]
Beer was part of the daily diet of Egyptian pharaohs over 5,000 years ago. Then, it was made from baked barley bread, and was also used in religious practices.[40] During the building of the Great Pyramids in Giza, Egypt, each worker got a daily ration of four to five liters of beer, which served as both nutrition and refreshment that was crucial to the pyramids' construction.[41]
The Greek writer Sophocles (450 BC) discussed the concept of moderation when it came to consuming beer in Greek culture, and believed that the best diet for Greeks consisted of bread, meats, various types of vegetables, and beer, or "ζῦθος" (zythos) as they called it.[42] The ancient Greeks also made barleywine (Greek: "κρίθινος οἶνος" – krithinos oinos, "barley wine"[43][44]) mentioned by Greek historian Polybius in his work The Histories, where he states that Phaeacians kept barleywine in silver and golden kraters.[45]
During the £1.5bn upgrade of the A14 in Cambridgeshire, evidence was found that beer was brewed in Britain more than 2,000 years ago. Steve Sherlock, the Highways England archaeology lead for the A14 project said, "It’s a well-known fact that ancient populations used the beer-making process to purify water and create a safe source of hydration, but this is potentially the earliest physical evidence of that process taking place in the UK." Roger Protz, the former editor of the Campaign for Real Ale's Good Beer Guide, said, "When the Romans invaded Britain they found the local tribes brewing a type of beer called curmi."[46]
In Europe during the Middle Ages, a brewers' guild might adopt a patron saint of brewing. Arnulf of Metz (c. 582–640) and Arnulf of Oudenburg (c. 1040–1087) were recognized by some French and Flemish brewers.[47] Belgian brewers, too, venerated Arnulf of Oudenburg (aka Arnold of Soissons),[48] who is also recognized as the patron saint of hop-pickers. Christian monks built breweries, to provide food, drink, and shelter to travelers and pilgrims.[40]
Charlemagne, Frankish king and ruler of the Holy Roman Empire during the 8th century, considered beer to be an important part of living, and is often thought to have trained some brewers himself.[36]
Beer was one of the most common drinks during the Middle Ages. It was consumed daily by all social classes in the northern and eastern parts of Europe where grape cultivation was difficult or impossible.[citation needed] Though wine of varying qualities was the most common drink in the south, beer was still popular among the lower classes. The idea that beer was consumed more commonly than water during medieval times is considered by some historians to be a myth.[49] Water was cheaper than beer, and towns/villages were built close to sources of fresh water such as rivers, springs, and wells to facilitate easy access to the resource.[50] Though probably one of the most popular drinks in Europe, beer was frequently disdained as being unhealthy, possibly because ancient Greek and more contemporary Arab physicians had little or no experience with the drink. In 1256, the Aldobrandino of Siena described the nature of beer in the following way:
But from whichever it is made, whether from oats, barley or wheat, it harms the head and the stomach, it causes bad breath and ruins the teeth, it fills the stomach with bad fumes, and as a result anyone who drinks it along with wine becomes drunk quickly; but it does have the property of facilitating urination and makes one's flesh white and smooth.[51]
The use of hops in beer was written of in 822 by the Carolingian Abbot Adalard of Corbie.[52] Flavoring beer with hops was known at least since the 9th century, but was only gradually adopted because of difficulties in establishing the right proportions of ingredients. Before that, gruit, a mix of various herbs, had been used, but did not have the same preserving properties as hops. Beer flavored without it was often spoiled soon after preparation and could not be exported. The only other alternative was to increase the alcohol content, which was rather expensive. Hopped beer was perfected in the medieval towns of Bohemia by the 13th century. German towns pioneered a new scale of operation with standardized barrel sizes that allowed for large-scale export. Previously beer had been brewed at home, but the production was now successfully replaced by medium-sized operations of about eight to ten people. This type of production spread to Holland in the 14th century and later to Flanders and Brabant, and reached England by the late 15th century.[53]
English ale and beer brewing were carried out separately, no brewer being allowed to produce both. The Brewers Company of London stated "no hops, herbs, or other like thing be put into any ale or liquore wherof ale shall be made – but only liquor (water), malt, and yeast." This comment is sometimes misquoted as a prohibition on hopped beer.[54] However, hopped beer was opposed by some:
Ale is made of malte and water; and they the which do put any other thynge to ale than is rehersed, except yest, barme, or goddesgood [three words for yeast], doth sophysticat there ale. Ale for an Englysshe man is a naturall drinke. Ale muste haue these properties, it muste be fresshe and cleare, it muste not be ropy, nor smoky, nor it must haue no wefte nor tayle. Ale shulde not be dronke vnder .v. dayes olde …. Barly malte maketh better ale than Oten malte or any other corne doth … Beere is made of malte, of hoppes, and water; it is a naturall drynke for a doche [Dutch] man, and nowe of late dayes it is moche vsed in Englande to the detryment of many Englysshe men … for the drynke is a colde drynke. Yet it doth make a man fatte, and doth inflate the bely, as it doth appere by the doche mennes faces and belyes.[55]
In Europe, beer brewing largely remained a home activity in medieval times. By the 14th and 15th centuries, beermaking was gradually changing from a family-oriented activity to an artisan one, with pubs and monasteries brewing their own beer for mass consumption.
In the late Middle Ages, the brewing industry in northern Europe changed from a small-scale domestic industry to a large-scale export industry. The key innovation was the introduction of hops, which began in northern Germany in the 13th century. Hops sharply improved both the brewing process and the quality of beer. Other innovations from German lands involved larger kettle sizes and more frequent brewing. Consumption went up, while brewing became more concentrated because it was a capital-intensive industry. Thus in Hamburg per capita consumption increased from an average of 300 liters per year in the 15th century to about 700 in the 17th century.[56]
The use of hops spread to the Netherlands and then to England. In 15th century England, an unhopped beer would have been known as an ale, while the use of hops would make it a beer. Hopped beer was imported to England from the Netherlands as early as 1400 in Winchester, and hops were being planted on the island by 1428. The popularity of hops was at first mixed—the Brewers Company of London went so far as to state "no hops, herbs, or other like thing be put into any ale or liquore wherof ale shall be made—but only liquor (water), malt, and yeast." However, by the 16th century, ale had come to refer to any strong beer, and all ales and beers were hopped, giving rise to the verse noted by the antiquary John Aubrey:
the year, according to Aubrey, being the fifteenth of Henry VIII (1524).[57]
In 1516, William IV, Duke of Bavaria, adopted the Reinheitsgebot (purity law), perhaps the oldest food regulation still in use through the 20th century (the Reinheitsgebot passed formally from German law in 1987). The Gebot ordered that the ingredients of beer be restricted to water, barley, and hops; yeast was added to the list after Louis Pasteur's discovery in 1857. The Bavarian law was applied throughout Germany as part of the 1871 German unification as the German Empire under Otto von Bismarck, and has since been updated to reflect modern trends in beer brewing. To this day, the Gebot is considered a mark of purity in beers, although this is controversial.
Most beers until relatively recent times were top-fermented. Bottom-fermented beers were discovered by accident in the 16th century after beer was stored in cool caverns for long periods; they have since largely outpaced top-fermented beers in terms of volume. For further discussion of bottom-fermented beers, see Pilsner and Lager.
Documented evidence and recently excavated tombs indicate that the Chinese brewed alcoholic drinks from both malted grain and mold-converted grain from prehistoric times. The malt conversion process, however, was largely considered inefficient compared with the use of molds specially cultivated on a rice carrier (the resulting molded rice is called 酒麴, Jiǔ qū, in Chinese and koji in Japanese) to convert cooked rice into fermentable sugars, both in the amount of fermentable sugars produced and in the residual by-products, because the rice undergoes starch conversion after being hulled and cooked, rather than whole and in husks like barley malt. (The Chinese use the dregs left after fermenting the rice, called 酒糟, Jiǔzāo, as a cooking ingredient in many dishes, frequently in sauces where Western dishes would use wine.) Furthermore, since the hop plant was unknown in East Asia, malt-based alcoholic drinks did not preserve well over time, and the use of malt in the production of alcoholic drinks gradually fell out of favor in China, disappearing from Chinese history by the end of the Tang dynasty. The use of rice became dominant, such that wines made from fruits of any type were historically all but unknown in China except as imports.
The production of alcoholic drink from cooked rice converted by microbes continues to this day, and some classify the different varieties of Chinese 米酒 (Mǐjiǔ) and Japanese sake as beer, since they are made from converted starch rather than fruit sugars. However, this is a debatable point, and such drinks are generally referred to as "rice wine" or "sake", terms which are really the generic Chinese and Japanese words for all alcoholic drinks.
The earliest evidence of beer-making in China is from around 5,000 years ago at the Mijiaya site.[58]
Some Pacific island cultures ferment starch that has been converted to fermentable sugars by human saliva, similar to the chicha of South America. This practice is also used by many other tribes around the world, who either chew the grain and then spit it into the fermentation vessel or spit into a fermentation vessel containing cooked grain, which is then sealed up for the fermentation. Enzymes in the spittle convert the starch into fermentable sugars, which are fermented by wild yeast. Whether or not the resulting product can be called beer is sometimes disputed, since:
As with Asian rice-based liquors, it does not involve malting.
This method is often used with starches derived from sources other than grain, such as yams, taro, or other such root vegetables.
Some Taiwanese tribes have taken the process a step further by distilling the resulting alcoholic drink, resulting in a clear liquor. However, as none of the Taiwanese tribes are known to have developed systems of writing, there is no way to document how far back this practice goes, or if the technique was brought from Mainland China by Han Chinese immigrants. Judging by the fact that this technique is usually found in tribes using millet (a grain native to northern China) as the ingredient, the latter seems much more likely.
Asia's first brewery was incorporated in 1855 (although it was established earlier) by Edward Dyer at Kasauli in the Himalayan Mountains in India under the name Dyer Breweries. The company still exists and is known as Mohan Meakin, today comprising a large group of companies across many industries.
Following significant improvements in the efficiency of the steam engine in 1765, industrialization of beer became a reality. Further innovations in the brewing process came about with the introduction of the thermometer in 1760 and hydrometer in 1770, which allowed brewers to increase efficiency and attenuation.
Prior to the late 18th century, malt was primarily dried over fires made from wood, charcoal, or straw, and after 1600, from coke.
In general, none of these early malts would have been well shielded from the smoke involved in the kilning process, and consequently, early beers would have had a smoky component to their flavors; evidence indicates that maltsters and brewers constantly tried to minimize the smokiness of the finished beer.
Writers of the period describe the distinctive taste derived from wood-smoked malts, and the almost universal revulsion it engendered. The smoked beers and ales of the West Country were famous for being undrinkable – locals and the desperate excepted. This is from "Directions for Brewing Malt Liquors" (1700):
In most parts of the West, their malt is so stenched with the Smoak of the Wood, with which 'tis dryed, that no Stranger can endure it, though the inhabitants, who are familiarized to it, can swallow it as the Hollanders do their thick Black Beer Brewed with Buck Wheat.
An even earlier reference to such malt was recorded by William Harrison, in his "Description of England", 1577:
In some places it [malt] is dried at leisure with wood alone, or straw alone, in other with wood and straw together, but, of all, the straw-dried is the most excellent. For the wood-dried malt, when it is brewed, beside that the drink is higher of colour, it doth hurt and annoy the head of him that is not used thereto, because of the smoke. Such also as use both indifferently do bark, cleave, and dry their wood in an oven, thereby to remove all moisture that should procure the fume ...
"London and Country Brewer" (1736) specified the varieties of "brown malt" popular in the city:
Brown Malts are dryed with Straw, Wood and Fern, etc. The straw-dryed is the best, but the wood sort has a most unnatural Taste, that few can bear with, but the necessitous, and those that are accustomed to its strong smoaky tang; yet it is much used in some of the Western Parts of England, and many thousand Quarters of this malt has been formerly used in London for brewing the Butt-keeping-beers with, and that because it sold for two shillings per Quarter cheaper than Straw-dryed Malt, nor was this Quality of the Wood-dryed Malt much regarded by some of its Brewers, for that its ill Taste is lost in nine or twelve Months, by the Age of the Beer, and the strength of the great Quantity of Hops that were used in its preservation.
The hydrometer transformed how beer was brewed. Before its introduction beers were brewed from a single malt: brown beers from brown malt, amber beers from amber malt, pale beers from pale malt. Using the hydrometer, brewers could calculate the yield from different malts. They observed that pale malt, though more expensive, yielded far more fermentable material than cheaper malts. For example, brown malt (used for Porter) gave 54 pounds of extract per quarter, whilst pale malt gave 80 pounds. Once this was known, brewers switched to using mostly pale malt for all beers supplemented with a small quantity of highly coloured malt to achieve the correct colour for darker beers.
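The arithmetic behind that switch is easy to sketch. The extract figures below are the ones quoted above (pounds of fermentable extract per quarter of malt); the function and its names are our own illustration, not a historical brewing formula:

```python
# Illustrative sketch only: extract yields (lb of fermentable extract per
# quarter of malt) as quoted for 18th-century London brewing.
EXTRACT_PER_QUARTER_LB = {"brown": 54.0, "pale": 80.0}

def quarters_needed(target_extract_lb: float, malt: str) -> float:
    """Quarters of the given malt required to yield a target extract."""
    return target_extract_lb / EXTRACT_PER_QUARTER_LB[malt]

# A grist of 10 quarters of brown malt yields 540 lb of extract;
# the hydrometer showed the same extract needs far less pale malt.
target = 10 * EXTRACT_PER_QUARTER_LB["brown"]  # 540 lb
print(quarters_needed(target, "pale"))  # 6.75 quarters
```

Seen this way, pale malt's higher price per quarter could still mean a cheaper grist overall, which is why brewers moved to a pale base coloured with small amounts of dark malt.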
The invention of the drum roaster in 1817 by Daniel Wheeler allowed for the creation of very dark, roasted malts, contributing to the flavour of porters and stouts. Its development was prompted by a British law of 1816 forbidding the use of any ingredients other than malt and hops. Porter brewers, employing a predominantly pale malt grist, urgently needed a legal colourant. Wheeler's patent malt was the solution.
Yeast ring used by Swedish homebrewers in the 19th century to preserve the yeast between brewing sessions.
Louis Pasteur's 1857 discovery of yeast's role in fermentation led to brewers developing methods to prevent the souring of beer by undesirable microorganisms.
Bottling beer in a modern facility, 1945, Australia
Traditional fermenting building (center) and modern fermenting building (left) in Pilsner Urquell Brewery (Czech Republic)
Many European nations have unbroken brewing traditions dating back to the earliest historical records. Beer is an especially important drink in countries such as Belgium, Germany, Austria, Ireland, the UK (England, Wales, Scotland and Northern Ireland), France, the Scandinavian countries, Poland, the Czech Republic, Spain and others having strong and unique brewing traditions with their own history, characteristic brewing methods, and styles of beer.
Unlike in many parts of the world, there is a significant market in Europe (the UK in particular) for beer containing live yeast. These unfiltered, unpasteurised brews are more challenging to handle than the commonly sold "dead" beers; "live" beer quality can suffer with poor care, but many people prefer its taste. While beer is usually matured for relatively short times (a few weeks to a few months) compared to wine, some of the stronger so-called real ales have been found to develop character and flavour over the course of as much as several decades.
World beer consumption per capita
In some parts of the world, breweries that had begun as a family business by Germans or other European émigrés grew into large companies, often passing into hands with more concern for profits than traditions of quality, resulting in a degradation of the product.
In 1953, New Zealander Morton Coutts developed the technique of continuous fermentation. Coutts patented his process, which involves beer flowing through sealed tanks, fermenting under pressure, and never coming into contact with the atmosphere, even when bottled. His process was introduced in the US and UK, but is now used for commercial beer production only in New Zealand.[60]
In some sectors brewers are reluctant to embrace new technology for fear of losing the traditional characteristics of their beer. For example, Marston's Brewery in Burton on Trent still uses open wooden Burton Union sets for fermentation in order to maintain the quality and flavour of its beers, while Belgium's lambic brewers go so far as to expose their brews to outside air in order to pick up the natural wild yeasts which ferment the wort. Traditional brewing techniques protect the beer from oxidation by maintaining a carbon dioxide blanket over the wort as it ferments into beer.
Today, the brewing industry is a huge global business, consisting of several multinational companies, and many thousands of smaller producers ranging from brewpubs to regional breweries. Advances in refrigeration, international and transcontinental shipping, marketing and commerce have resulted in an international marketplace, where the consumer has literally hundreds of choices between various styles of local, regional, national and foreign beers.
United States
Prior to Prohibition, there were thousands of breweries in the United States, mostly brewing heavier beers than modern US beer drinkers are used to. Beginning in 1920, most of these breweries went out of business, although some converted to soft drinks and other businesses. Bootlegged beer was often watered down to increase profits, beginning a trend, still ongoing today, of the American markets heavily advertising the weaker beers and keeping them popular.

Consolidation of breweries and the application of industrial quality control standards have led to the mass-production and mass-marketing of huge quantities of light lagers. Advertising became supreme, and bigger companies fared better in that market. The decades after World War II saw a huge consolidation of the American brewing industry: brewing companies would buy their rivals solely for their customers and distribution systems, shutting down their brewing operations.[61] Consolidation was not new, however: despite record increases in production between 1870 and 1895, the number of firms fell by 46% in that period. Average brewery output rose significantly, driven partly by a rapid increase in output by the largest breweries. As late as 1877, only four breweries topped 100,000 barrels annually; by 1895, the largest sixteen firms had greatly increased their productive capacity and were all brewing over 250,000 barrels annually.[62]

Imports have become more abundant since the mid-1980s. The number of breweries has been claimed as being either over 1,500 in 2007 or over 1,400 in 2010, depending on the source. As of June 2013, the Brewers Association reported the total number of currently operating US breweries to be 2,538, with only 55 of those being non-craft breweries.[63][64][65][66]
The Finnish epic Kalevala, collected in written form in the 19th century but based on oral traditions many centuries old, devotes more lines to the origin of beer and brewing than it does to the origin of mankind.
The mythical Flemish king Gambrinus (from Jan Primus (John I)), is sometimes credited with the invention of beer.
In Egyptian mythology, the immense blood-lust of the fierce lioness goddess Sekhmet was only sated after she was tricked into consuming an extremely large amount of red-coloured beer (believing it to be blood): she became so drunk that she gave up slaughter altogether and became docile.
In Norse mythology the sea god Ægir, his wife Rán, and their nine daughters, brewed ale (or mead) for the gods. In the Lokasenna, it is told that Ægir would host a party where all the gods would drink the beer he brewed for them. He made this in a giant kettle that Thor had brought. The cups in Ægir's hall were always full, magically refilling themselves when emptied. Ægir had two servants in his hall to assist him; Eldir [Fire-Kindler] and Fimafeng [Handy].
The word beer comes from the old Germanic languages and is used, with variations, in the continental Germanic languages (bier in German and Dutch), but not in the Nordic languages. The word was imported into the British Isles by tribes such as the Saxons. It is disputed where the word originally comes from.
Many other languages have borrowed the Dutch/German word, such as French bière, Italian birra, Romanian "bere" and Turkish bira. The Nordic languages have öl/øl, related to the English word ale. Spanish, Portuguese and Catalan have words that evolved from Latin cervisia, originally of Celtic origin. Slavic languages use pivo with small variations, based on a pre-Slavic word meaning "drink" and derived from the verb meaning "to drink".
Chuvash "pora" its r-Turkic counterpart, which may ultimately be the source of the Germanic beer-word.[67]
Jofroi of Waterford, a Paris-based Dominican, wrote around 1300 a catalogue of all the known wines and ales of Europe, describing them with great relish and recommending them to academics and counselors.
The first written records of brewing come from Mesopotamia (ancient Iraq), with the oldest in the Sumerian language from approximately 4,000 BC.[16] These include early evidence of beer in the 3,900-year-old Sumerian poem honoring Ninkasi, the patron goddess of brewing, which contains the oldest surviving beer recipe, describing the production of beer from barley via bread.[17]
"Ninkasi, you are the one who pours out the filtered beer of the collector vat... It is [like] the onrush of Tigris and Euphrates. "[18]
Approximately 5,000 years ago, workers in the city of Uruk were paid by their employers in beer.[19] Beer is also mentioned in the Epic of Gilgamesh, in which the 'wild man' Enkidu is given beer to drink. "... he ate until he was full, drank seven pitchers of beer, his heart grew light, his face glowed and he sang out with joy. "[16]
Confirmed written evidence of ancient beer production in Armenia comes from Xenophon, who in his Anabasis (5th century BC) described barley wine, drunk through reeds from great bowls, in an ancient Armenian village.
The intoxicant known in English as `beer' takes its name from the Latin `bibere' (by way of the German `bier'), meaning `to drink', and the Spanish word for beer, `cerveza', comes from the Latin word `cerevisia' for `of beer', giving some indication of the long span of time over which human beings have been enjoying the drink.
Even so, beer brewing did not originate with the Romans but began thousands of years earlier. The Chinese brewed a type of beer, but the product which became the most popular is credited to the Sumerians of Mesopotamia and most likely originated over 10,000 years ago. The site known as Godin Tepe (in modern-day Iran) has provided evidence of beer brewing c. 3500 BCE, while sites excavated in Sumer suggest an even earlier date, based on ceramics considered the remains of beer jugs and residue found in other ancient containers. Nevertheless, the date of c. 4000 BCE is usually given for the creation of beer.
The craft of beer brewing traveled to Egypt through trade and the Egyptians improved upon the original process, creating a lighter product that enjoyed great popularity. Although beer was known afterwards to the Greeks and Romans, it never gained the same kind of following as those cultures preferred wine and thought of beer as a "barbarian" drink. One of the many peoples they regarded as "barbarians" - the Germans - perfected the art of brewing and created what is recognized today as beer.
First Beer Brewing
The first beer in the world was brewed by the ancient Chinese around the year 7000 BCE (a brew known as kui). In the West, however, the process now recognized as beer brewing began in Mesopotamia at the Godin Tepe settlement, now in modern-day Iran, between 3500-3100 BCE. Evidence of beer manufacture has been confirmed between these dates, but it is probable that the brewing of beer in Sumer (southern Mesopotamia, modern-day Iraq) was in practice much earlier.
Some evidence has been interpreted, however, which sets the date of beer brewing at Godin Tepe as early as 10,000 BCE when agriculture first developed in the region. While some scholars have contended that beer was discovered accidentally through grains used for bread-making which fermented, others claim that it preceded bread as a staple and that it was developed intentionally as an intoxicant. The scholar Max Nelson writes:
Fruits often naturally ferment through the actions of wild yeast and the resultant alcoholic mixtures are often sought out and enjoyed by animals. Pre-agricultural humans in various areas from the Neolithic Period on surely similarly sought out such fermenting fruits and probably even collected wild fruits in the hopes that they would have an interesting physical effect (that is, be intoxicating) if left in the open air. (9)
This theory of the intentional brewing of intoxicants, whether beer, wine, or other drink, is supported by the historical record which strongly suggests that human beings, after taking care of their immediate needs of food, shelter, and rudimentary laws, will then pursue the creation of some type of intoxicant. Although beer as it is recognized in the modern day was developed in Europe (specifically in Germany), the brew was first enjoyed in ancient Mesopotamia.
Mesopotamian Beer Rations Tablet
Osama Shukir Muhammed Amin (Copyright)
Beer in Mesopotamia
The people of ancient Mesopotamia enjoyed beer so much that it was a daily dietary staple. Paintings, poems, and myths depict both human beings and their gods enjoying beer which was consumed through a straw to filter out pieces of bread or herbs in the drink. The brew was thick, of the consistency of modern-day porridge, and the straw was invented by the Sumerians or the Babylonians, it is thought, specifically for the purpose of drinking beer.
The famous poem Inanna and the God of Wisdom describes the two deities drinking beer together and the god of wisdom, Enki, becoming so drunk he gives away the sacred meh (laws of civilization) to Inanna (thought to symbolize the transfer of power from Eridu, the city of Enki, to Uruk, the city of Inanna). The Sumerian poem Hymn to Ninkasi is both a song of praise to the goddess of beer, Ninkasi, and a recipe for beer, first written down around 1800 BCE.
In the Sumerian/Babylonian The Epic of Gilgamesh, the hero Enkidu becomes civilized through the ministrations of the temple harlot Shamhat who, among other things, teaches him to drink beer. Later in the story, the barmaid Siduri counsels Gilgamesh to give up his quest for the meaning of life and simply enjoy what it has to offer, including beer.
The Sumerians had many different words for beer from sikaru to dida to ebir (which meant `beer mug') and regarded the drink as a gift from the gods to promote human happiness and well being. The original brewers were women, the priestesses of Ninkasi, and women brewed beer regularly in the home as part of their preparation of meals. Beer was made from bippar (twice-baked barley bread) which was then fermented and beer brewing was always associated with baking. The famous Alulu beer receipt from the city of Ur in 2050 BCE, however, shows that beer brewing had become commercialized by that time. The tablet acknowledges receipt of 5 Silas of `the best beer' from the brewer Alulu (five Silas being approximately four and a half litres).
Under Babylonian rule, Mesopotamian beer production increased dramatically, became more commercialized, and laws were instituted concerning it as paragraphs 108-110 of the Code of Hammurabi make clear:
108
If a tavern-keeper (feminine) does not accept grain according to gross weight in payment of drink, but takes money, and the price of the drink is less than that of the grain, she shall be convicted and thrown into the water.
109
If conspirators meet in the house of a tavern-keeper, and these conspirators are not captured and delivered to the court, the tavern-keeper shall be put to death.
110
If a "sister of a god" open a tavern, or enter a tavern to drink, then shall this woman be burned to death.
Law 108 had to do with those tavern keepers who poured `short measures' of beer in return for cash instead of grain (which could be weighed and held to a measure) to cheat their customers; they would be drowned if caught doing so. Beer was commonly used in barter, not for cash sale, and a daily ration of beer was provided for all citizens; the amount received depended on one's social status.
The second law concerns tavern keepers encouraging treason by allowing malcontents to gather in their establishment and the third law cited concerns women who were consecrated to, or were priestesses of, a certain deity opening a common drinking house or drinking in an already established tavern. The Babylonians had nothing against a priestess drinking beer (as, with the Sumerians, beer was considered a gift from the gods) but objected to one doing so in the same way as common women would.
The Babylonians brewed many different kinds of beer and classified them into twenty categories which recorded their various characteristics. Beer became a regular commodity in foreign trade, especially with Egypt, where it was very popular.
The Egyptian goddess of beer was Tenenit (closely associated with Meskhenet, goddess of childbirth and protector of the birthing house), whose name derives from tenemu, one of the Egyptian words for beer. The most popular beer in Egypt was Heqet (or Hecht), which was a honey-flavored brew, and their word for beer in general was zytum. The workers at the Giza plateau received beer rations three times a day, and beer was often used throughout Egypt as compensation for labor.
The Egyptians believed that brewing was taught to human beings by the great god Osiris himself and in this, and other regards, they viewed beer in much the same way as the Mesopotamians did. As in Mesopotamia, women were the chief brewers at first and brewed in their homes, the beer initially had the same thick, porridge-like consistency, and was brewed in much the same way. Later, men took over the business of brewing and miniature carved figures found in the tomb of Meketre (Prime Minister to the pharaoh Mentuhotep II, 2050-2000 BCE) show an ancient brewery at work. According to the Metropolitan Museum of Art, describing the diorama, "The overseer with a baton sits inside the door. In the brewery two women grind flour, which another man works into dough. After a second man treads the dough into mash in a tall vat, it is put into tall crocks to ferment. After fermentation, it is poured off into round jugs with black clay stoppers" (1).
Ancient Egyptian Brewery and Bakery
Keith Schengili-Roberts (CC BY-SA)
Beer played an integral role in the very popular myth of the birth of the goddess Hathor. According to the tale (which forms part of the text of the Book of the Heavenly Cow - a version of the Great Flood myth which pre-dates the biblical tale of the Flood in the biblical book of Genesis) the god Ra, incensed at the evil and ingratitude of humanity who have rebelled against him, sends Hathor to earth to destroy his creation. Hathor sets to work and falls into an intense blood lust as she slaughters humanity, transforming herself into the goddess Sekhmet. Ra is at first pleased but then repents of his decision as Sekhmet's blood lust grows with the destruction of every town and city. He has a great quantity of beer dyed red and dropped at the city of Dendera where Sekhmet, thinking it is a huge pool of blood, stops her rampage to drink. She gets drunk, falls asleep, and wakes again as the goddess Hathor, the benevolent deity of, among other things, music, laughter, the sky and, especially, gratitude.
The association between gratitude, Hathor, and beer is highlighted by an inscription from 2200 BCE found at Dendera, Hathor's cult center: "The mouth of a perfectly contented man is filled with beer." Beer was enjoyed so regularly among the Egyptians that Queen Cleopatra VII (c. 69-30 BCE) lost popularity toward the end of her reign more for implementing a tax on beer (the first ever) than for her wars with Rome, which the beer tax helped to pay for (although she claimed the tax was to deter public drunkenness). As beer was often prescribed for medicinal purposes (there were over 100 remedies using beer), the tax was considered unjust.
Beer brewing traveled from Egypt to Greece (as we know from the Greek word for beer, zythos, from the Egyptian zytum) but did not find the same receptive climate there. The Greeks favored strong wine over beer, as did the Romans after them, and both cultures considered beer a low-class drink of barbarians. The Greek general and writer Xenophon, in Book IV of his Anabasis, writes:
There were stores within of wheat and barley and vegetables, and wine made from barley in great big bowls; the grains of barley malt lay floating in the beverage up to the lip of the vessel, and reeds lay in them, some longer, some shorter, without joints; when you were thirsty you must take one of these into your mouth, and suck. The beverage without admixture of water was very strong, and of a delicious flavour to certain palates, but the taste must be acquired. (26-27)
Clearly, beer was not to Xenophon's taste; nor was it any more popular with his countrymen. The playwright Sophocles, among others, also refers to beer somewhat unfavorably and recommends moderation in its use. The Roman historian, Tacitus, writing of the Germans, says, "To drink, the Teutons have a horrible brew fermented from barley or wheat, a brew which has only a very far removed similarity to wine" and the Emperor Julian composed a poem claiming the scent of wine was of nectar while the smell of beer was that of a goat.
Even so, the Romans were brewing beer (cerevisia) quite early, as evidenced by the tomb of a beer brewer and merchant (a Cerveserius) in ancient Treveris (modern-day Trier). Excavations of the Roman military encampment on the Danube, Castra Regina (modern-day Regensburg), have unearthed evidence of beer brewing on a significant scale shortly after the community was built in 179 CE by Marcus Aurelius.
Still, beer was not as popular as wine among the Celts, an attitude encouraged by the Romans, who had favored wine all along. The Celtic tribes paid enormous sums for wine provided by Italian merchants, and the people of Gaul were famous for their love of Italian wines. In spite of the elite view that it was a low-class drink suitable only for barbarians, however, beer brewing continued to develop throughout Europe, beginning in Germany.
Beer in Northern Europe
The Germans were brewing beer (which they called ol, for 'ale') as early as 800 BCE, as is known from great quantities of beer jugs, still containing evidence of the beer, found in a tomb in the village of Kasendorf in northern Bavaria, near Kulmbach. That the practice continued into the Christian era is evidenced by further archaeological finds and the written record. Early on, as it had been in Mesopotamia and Egypt, the craft of the brewer was the province of women, and the Hausfrau brewed her beer in the home to supplement the daily meals.
In time, however, the craft was taken over primarily by Christian monks, and brewing became an integral part of monastic life. The Kulmbacher Mönchshof Kloster, a monastery founded in 1349 CE in Kulmbach, still produces its famous Schwarzbier, among other brews, today. In 1516 CE the German Reinheitsgebot (purity law) was instituted, regulating the ingredients that could legally be used in brewing beer (only water, barley, hops and, later, yeast) and, in so doing, continuing the practice of legislating beer that the Babylonians under Hammurabi had begun some three thousand years earlier. The Germans, like those who preceded them, also instituted a daily beer ration and considered beer a necessary staple of their diet.
From the Celtic lands (Germany through Britain, though which country brewed first is disputed) beer brewing spread, always following the same basic principles first instituted by the Sumerians: female brewers making beer in the home, use of fresh, hot water and fermented grains. The Finnish Saga of Kalewala (first written down in the 17th century CE from much older, pre-Christian, tales and consolidated in its present form in the 19th century) sings of the creation of beer at length, devoting more lines to the creation of beer than the creation of the world.
The female brewer, Osmata, trying to make a great beer for a wedding feast, discovers the use of hops in brewing with the help of a bee she sends to gather the magical plant. The poem expresses an admiration for the effects of beer which any modern-day drinker would recognize:
Great indeed the reputation
Of the ancient beer of Kalew,
Said to make the feeble hardy,
Famed to dry the tears of women,
Famed to cheer the broken-hearted,
Make the aged young and supple,
Make the timid brave and mighty,
Make the brave men ever braver,
Fill the heart with joy and gladness,
Fill the mind with wisdom-sayings,
Fill the tongue with ancient legends,
Only makes the fool more foolish.
In the Finnish saga, as in the writings of the ancient Sumerians, beer was considered a magical brew from the gods, endowing the drinker with health, peace of mind, and happiness. This idea was cleverly phrased by the poet A. E. Housman when he wrote, "Malt does more than Milton can to justify God's ways to man" (a reference to the English poet John Milton and his 'Paradise Lost'). From ancient Sumeria to the present day, Housman's claim would go undisputed among those who have enjoyed the drink of the gods.
About the Author
A freelance writer and former part-time Professor of Philosophy at Marist College, New York, Joshua J. Mark has lived in Greece and Germany and traveled through Egypt. He has taught history, writing, literature, and philosophy at the college level.
License & Copyright
Submitted by Joshua J. Mark, published on 02 March 2011. The copyright holder has published this content under the following license: Creative Commons Attribution-NonCommercial-ShareAlike. This license lets others remix, tweak, and build upon this content non-commercially, as long as they credit the author and license their new creations under the identical terms. When republishing on the web a hyperlink back to the original content source URL must be included. Please note that content linked from this page may have different licensing terms.
8 Facts About the History of Beer You Probably Never Knew
The history of beer takes us back to the dawn of civilization. Few drinks have had a greater, or more long-lasting influence on mankind.
Dec 27, 2022 • By Vedran Bileta, MA in Late Antique, Byzantine, and Early Modern History, BA in History
The history of beer goes back to the dawn of human civilization. This popular alcoholic drink appeared in the Neolithic, at the same time as bread (if not earlier) during the agricultural revolution, which set mankind on the path towards modernity through the adoption of farming, and the creation of the first settlements.
Unsurprisingly, ancient beer was originally made from grain. Hops only became part of the recipe thousands of years later, during the Middle Ages. The Babylonians and Egyptians had dozens of recipes for beer, and pharaohs were buried with jars filled with the tasty brew. Even workers were paid in beer. During the Middle Ages, beer spread all over Europe, and beermaking became one of the most important industries on the continent. After the Industrial revolution, it went global, turning into a 20th-century behemoth. Today, beer is the third most widely consumed drink, second only to water and tea.
1. The History of Beer Begins at the Dawn of Civilization
Cylinder seal (left) and modern impression (right) depicting two people drinking beer through long straws, found in Khafajeh, Iraq, ca. 2600–2350 BCE, via Oriental Institute of the University of Chicago
The history of beer is inseparable from the history of the human race. In fact, it is one of mankind's oldest beverages. The popular fermented drink was not invented but discovered, though historians are not sure of the exact date or even the culture responsible. However, it is known that the discovery of beer coincided with the end of the last ice age around 10,000 BCE and the agricultural revolution that followed. The domestication of wild cereals within the region known as the Fertile Crescent led to the emergence of the first human settlements and, eventually, the rise of the first advanced civilizations.
It also led to a fortunate by-product. People realized that when the grains got wet, they would ferment. This fermentation process transformed water into a delicious drink, and so it was that the first beer was discovered. It is possible that the discovery of beer fueled the agricultural revolution, as early people's thirst for a fermented brew led to the development of technologies later used to make bread.
Our first recorded evidence for beer drinking comes from a pictogram from Mesopotamia, dated 4000 BCE. It also provides evidence for ancient beer drinking techniques, which differed from ours. Instead of just drinking it from a cup, the two figures imbibe beer from a large pottery jar through reed straws. The ancient beer had grains, chaff, and other debris floating on its surface, so a straw was necessary to avoid swallowing them.
2. Beer Was an Early Symbol of Friendship
A plaque showing a banquet scene, found in Khafajeh, Iraq, ca. 2600–2350 BC, via the Oriental Institute of the University of Chicago
Sumerian depictions of two people drinking through straws from a shared vessel suggest another important role of beer among the ancients — its social function. The straws were useful, if not essential, at the very beginnings of beermaking, but by the Sumerian period the technique was refined, and the advent of pottery made them obsolete. That beer drinkers are, nonetheless, so widely depicted using straws points to the social dimension of the beer drinking ritual.
Unlike food, people can share beverages. Thus, unlike meat, where some parts are more desirable than others, the drink always tastes the same. This unique quality led to beer becoming a symbol of friendship and hospitality. By drinking from the same vessel, the host would show his guest that the drink was not poisoned or low quality. The person offering the drink could be trusted. Gradually, beer began to be served in individual cups, but the custom of sharing a drink from one vessel persisted up to the present day.
People still drink beer from the same vat, tea or coffee from the same pot, or a glass of wine or whiskey from a shared bottle. When drinking alcohol with family, friends, or colleagues, the clinking of glasses symbolically reunites the individual glasses into a single vessel of shared liquid, reminding us of ancient drinkers from the long history of beer.
3. Beer Was a Currency
Clay tablet from Uruk, record of beer rations, ca. 3100-3000 BCE, via the British Museum
The sedentary way of life and the surplus of cereal grains — barley and wheat – freed a small fraction of the population from the need to work in the fields, allowing for the emergence of highly specialized professions — priests, administrators, and scribes. One of their main tasks was to collect taxes, paid in grain and other goods, and their processed solid and liquid forms — bread and beer. This made beer more than just a simple foodstuff. It became a convenient and widespread form of payment and currency in all the empires of the Fertile Crescent, from Sumer to Egypt.
The importance of beer is reflected in one of the earliest recorded law codes. The famous Code of Hammurabi decreed a daily beer ration for the citizens of ancient Babylon. The drink was distributed according to social standing: common laborers received two liters daily, while priests and bureaucrats got five. In ancient Egypt, beer was essential for laborers, like those who built the pyramids of Giza, who were provided with a daily ration of over 10 pints of the tasty brew. Contrary to traditional belief, it was not slaves but a paid labor force that built some of the most iconic buildings in world history. And it was beer that fueled that labor.
4. Beer Was Divine
Besides its economic and social value, ancient beer also had divine status. This was partly due to the brew being a cornerstone of people's diets. In addition, the ancients considered beer a safer alternative to water, as nearby rivers and canals could often become contaminated by animal waste. The process of fermentation killed off harmful microorganisms while preserving nutrients absent from other drinks. Thus, it is unsurprising that, besides its use in religious ceremonies and rituals, beer was associated with the gods.
In fact, the first written beer recipe comes from a poem, the Hymn to Ninkasi, a 3,800-year-old ode to the Sumerian goddess of beer. The text, etched into clay tablets, praises the goddess and conveniently outlines the steps for brewing in such detail that modern researchers were able to recreate the ancient brewing process. Apparently, the Sumerians brewed good beer.
5. The Ancient History of Beer: Beloved by Egyptians, But Not Greeks and Romans
The ancient Egyptians, too, had a dedicated goddess of beer and drunkenness — Hathor. Actually, few early civilizations loved their beer as much as the society living along the Nile valley. Even the children were allowed to drink beer. Egyptian records mention at least seventeen kinds of beer, some of them referred to in poetic terms that remind us of advertising slogans such as “the beautiful and good”, “the heavenly”, “the joy-bringer”, “the plentiful”, and “the fermented”. The same applied to the beers used in religious ceremonies.
However, the ancient Greeks and Romans did not share the Egyptian taste for beer, preferring wine instead. To the Greeks, who appreciated moderation, beer was a drink for "barbarians". The Romans shared the opinion of their Mediterranean neighbors. Nevertheless, the Romans brought the brewing process to the northern limits of their Empire, spreading beer culture in colder areas where it was difficult, if not impossible, to plant vineyards.
6. Beer is a Medieval Success Story
The arrival of the Middle Ages revived Europe's interest in beer. Surprisingly, Christianity was one of the main "culprits", primarily due to the monks who modernized the brewing process by adding hops. As a result, medieval monasteries, places of frugal life and asceticism, became the first European breweries. The fasts prescribed by strict monastic rules did not allow the consumption of food, but no regulation forbade beer, which also had considerable nutritional value. In addition, many monasteries north of the Alps were in areas suitable for beer brewing, such as Bavaria and Bohemia.
The monasteries were also one of the principal places of medieval education and repositories of knowledge. Thus, monks could experiment and further improve the brewing process. In the ninth century, brewers began to flavor beer with hops. In the process, they discovered its preserving properties.
In addition, monasteries were also stopping places on pilgrimage routes, and beer served as welcome nourishment for weary travelers. Thus, monasteries made beer a favorite alcoholic drink of the Middle Ages, available to all, from kings, bishops, and nobility, to the common folk.
7. Bavaria Introduced the Golden Standard for Beer
For centuries the basic way to make beer was to boil malted barley with water and let it ferment. Sometimes, natural yeast did the vital work, but generally, the brewers would add yeast to speed up the process. The resulting mix would then be flavored with a mixture of various herbs. Adding hops improved the chances that the beer would not spoil, but the large variety of recipes continued to make beer-making difficult.
This all changed in the 16th century when, to standardize beer production, the Duke of Bavaria, Wilhelm IV, introduced the Reinheitsgebot, or beer purity law, in 1516. The Reinheitsgebot essentially removed everything but water, hops, and barley from the list of acceptable brewing ingredients. Interestingly, the law also omitted yeast from the recipe; it was only added back in 1857, after Louis Pasteur discovered yeast's role in fermentation.
The Reinheitsgebot remained law for the next 471 years and was repealed only in 1987. However, while brewers respected the law in Germany, Holland, and Belgium, French brewers continued to use various herbs, spices, and fruits to add extra flavor to their beer.
8. The War on Beer: The History of Beer in Modern Times
A map of the most popular beer brands in every country, via Business Insider
Despite its sky-high popularity in the medieval and early modern period, and the spread of breweries and pubs all over Europe, beer continued to encounter resistance in its mission to take over the world. During the 18th century’s Enlightenment, Europe tried to restrict alcohol, focusing instead on coffee and tea.
Yet, beer continued to rise in popularity, and in 1765, the invention of the steam engine sparked the Industrial Revolution and, by extension, industrialized the beer brewing process. In addition, the introduction of the thermometer and hydrometer was a watershed moment in the history of beer, increasing beer-making efficiency.
The most dangerous blow to the history of beer came in the 1920s, during the Prohibition era in the United States. Although George Washington owned a brewery and Thomas Jefferson wrote the Declaration of Independence in a pub, the “land of the free” declared war on beer and other alcohol, making its consumption illegal. Prohibition (which ended in 1933) had an unintended effect. It introduced people to watered-down beer, with a lighter flavor profile, present even today, especially among mass-marketed beers.
Fortunately, in recent years, another invention has taken place. Craft beer, made in small private breweries (but also by big behemoth companies), made serious gains in the market, yielding a historically unprecedented diversity of styles. Craft brewers are even reviving ancient recipes: recently, brewers made a beer using the Ninkasi poem’s recipe, thus bringing the history of beer full circle.
By Vedran Bileta, MA in Late Antique, Byzantine, and Early Modern History, BA in History. Vedran is a doctoral researcher based in Budapest. His main interest is Ancient History, in particular the Late Roman period. When not spending time with the military elites of the Late Roman West, he is sharing his passion for history with those willing to listen. In his free time, Vedran is wargaming and discussing Star Trek.
A History of German Beer Styles
Weissbier’s Rise to the Top
While Germany is considered by many as one of the world's epicenters of beer, brewing did not originate in Germany. The history of brewing dates back to the Ancient Sumerians, the oldest known civilization on Earth, who produced the oldest known beer recipe in a 3,900-year-old poem/ode to the goddess of brewing, Ninkasi. Believe it or not, the Sumerians were also the first to brew a wheat beer, or Weissbier in German, a style often associated strictly with Bavaria. German Weissbiers are obviously more complex than their Ancient Sumerian predecessors; they are legally required to be at least 50% malted wheat and are most often unfiltered, appearing hazy due to the excess yeast left in the beer.
Germany's Weissbier history begins in the 12th century in Bohemia, where the Pilsner originated as well. The style then spread to Bavaria, where in 1520 the noble Degenberg family obtained the exclusive rights to brew wheat beer, which they profited from for 80 years. The rights were then reclaimed by the Wittelsbach family, the original owners and ruling class of Bavaria, who monopolized the wheat beer brewing process. Weissbier was to be poured by every innkeeper in Bavaria and bought exclusively from the network of noble-owned breweries for the next 200 years. By 1798, Weissbier's prominence had waned, and the dukes began selling the brewing rights to monasteries and private breweries.
This was around the time during which Weihenstephan Brewing, well known for its Hefe Weissbier, fell under the control of the state of Bavaria. Weihenstephan is the oldest brewery in the world and continues to produce world class beers to this day. Weihenstephan began brewing officially in 1040 when the monks at the Weihenstephan Monastery, in the city of Freising, obtained a license to brew and sell beer. Their documented history of brewing at the monastery dates back even further to 768 AD when there was the first recorded mention of hops at the monastery. That year the Church began collecting 10% of the yearly hop produce from a nearby hop farm as a tax.
After a lull of more than a hundred years in Weissbier's popularity, during which Georg Schneider and his successors kept the faith alive with their production of the classic Schneider Weisse, the 1960s saw an immense surge in demand for the style. Weissbier is now once again the most popular beer style in Bavaria, comprising 33% of the overall beer market in the region. It is immensely popular in the rest of Germany as well as with craft brewers across the globe.
Bavaria: The Birthplace of Lager Brewing
While Weissbier, a top-fermenting ale, may currently be the most popular beer style in Bavaria, the region is much better known for its long history of lager brewing and the development of some of the most popular beer styles in the world. Classic styles such as the Märzen, Helles, Rauchbier, Kellerbier, Dunkel, Schwarzbier, and Bock can all trace their lineage to this southern region. Bavaria's brewing history, as well as that of continental Europe, begins in 800 BCE: an archeological dig unearthed amphorae, tall clay jugs, determined to have been used to hold "beer-like" liquids. From then on, brewing in the region continued to expand and evolve. While modern styles as we know them today did not develop until the 18th or 19th century, the tradition of lager brewing goes back further, due to both the region's climate and the previously discussed feudal control of brewing.
Prior to Louis Pasteur's discovery of the importance of yeast and the advent of refrigeration, Bavarian brewers took advantage of their cold winters in the shadow of the Alps and brewed the majority of their beer in the winter months. They would then store their beer in mountain caves, or dig cellars and fill them with ice, to keep it stable during the warm summer months. As a result of the cool temperatures, brewers unknowingly "selected" wild, bottom-fermenting yeasts that thrived in cooler temperatures and fermented more slowly. This process is known as lagering – lager is the German word for storeroom or warehouse – and the wild yeasts that naturally thrived in these cooler temperatures were harvested and later developed into pure yeast strains.
Beer brewed during the summer often soured quickly, as the wild yeast and bacteria in the air would go into overdrive eating the sugars in the beer, whereas they lay dormant in cooler temperatures. Brewing in the cooler months and lagering in caves became law on April 23rd, 1516, when Duke Wilhelm IV of Bavaria passed the Reinheitsgebot, or German Purity Law, which regulated the production of beer for the entirety of Bavaria. The Reinheitsgebot stated that only barley, hops, and water were to be used in the brewing process (yeast was added later, once its importance was understood); it also regulated the price of beer and enforced confiscation as a penalty for making impure beer. Duke Wilhelm's successor Albrecht V took the Reinheitsgebot a step further by outright banning brewing between April 23rd and September 29th.
This decree led to the advent of several new styles of lager, namely the original Märzen and the Dunkel, along with its cousins the Rauchbier and Schwarzbier. Märzen, or March beer, developed as a result of the decree, since brewers would ramp up production in March to create stronger beers that would store well in their lagering caves and tunnels over the summer months. The modern style of Märzen was developed by Gabriel Sedlmayr, the owner and head brewer at Spaten in Munich, who introduced it at the 1841 Oktoberfest celebration. This beer set the standard for the style and led to the 1872 creation of Spaten Oktoberfestbier, the world's first Oktoberfest beer and a recipe that is still used to this day. This malty, amber-hued beer was a perfect segue into the fall and winter season, when Dunkels had their time to shine.
Dunkels are the original winter lager: very malty, with nutty and bread-like flavors. Their dark colors came from the roasting of dark malts over an open flame. Depending on the darkness of the roasted malt or its exposure to smoke, Dunkels came to be classified as Schwarzbiers (black beer) or Rauchbiers (smoke beer). In modern times, as brewing became more of a science, brewers could coax precisely the right amount of dark color and smoke out of their roasted malt, turning these classifications into two distinct styles. As a result of their prominence and the winter-only brewing decree, Dunkels were the most popular beer style in Bavaria until 1894, when Spaten once again disrupted the German brewing scene by introducing the Helles Lager. This pale, straw-colored beer is very clean and crisp tasting, a stark visual contrast to the Dunkel.
The Helles style was Bavaria's answer to the popular Czech Pilsner, which had begun to infiltrate Germany. At first, the people of Munich were not at all receptive to the new pale beer that looked and tasted nothing like their beloved Dunkels. In fact, the Association of Munich Breweries held a meeting in late 1895 to declare that no brewery was to produce any type of pale lager. The declaration did not stick, however, as many brewers went ahead and brewed Helles anyway, seeing it as the future of beer. At the turn of the century more brewers began to change their tune and adopted the style, which now holds equal weight with Pilsner in the Bavarian beer market. While Helles has captured the attention of American craft brewers, it is not at all prominent in the rest of Germany.
On top of developing major beer styles, Bavaria revolutionized other aspects of brewing as well. The Reinheitsgebot and Albrecht's decree pushed brewers to be more inventive in controlling temperatures, leading to the creation of the first industrial refrigeration system. Once again Spaten took charge in innovating the industry when Gabriel Sedlmayr hired Carl von Linde to install refrigeration in Spaten's lagering cellar in 1873. This breakthrough in temperature control, paired with the ever-increasing understanding of yeast strains, gave German brewers the ability to further experiment and hone their craft, thus creating the modern-day styles of German beer. Additionally, Bavaria is the world's largest producer of hops, accounting for one third of the hops used in brewing across the globe. It is also a major producer of barley, wheat, and specialty malts that are regarded as some of the best in the world. With all this rich history and tradition, it is clear why Bavaria continues to set the standard for German lagers.
Traditional Styles
German Ales
Weissbier (Wheat Beer) – Known to Americans as Hefeweizen, this style is tart, spicy, and fruity, with flavors of banana and clove due to its high content of active yeast
Berliner Weiss – A low-ABV version of a Weissbier that is often enjoyed with a splash of sweet fruit syrup to cut the tartness
Kölsch – A light, crisp, and clear ale that is quite refreshing and native to the city of Cologne
Gose – A tart and sour beer, with salt and coriander typically added to create its citrusy and spiced aroma
German Lagers
Bock/Doppelbock – Dark, heavy, and malty beers enjoyed during the winter months as they contain sweet, toasty and often caramel flavors and aromas
Export Lager – A malty, crisp, and quite dry lager that is brewed with noble hops (low-bitterness, high-aroma hops) and was originally brewed for the working-class people of the city of Dortmund
Pilsner – Originating in the now Czech Republic city of Pilsen, Pilsners are the most popular beer in Germany and many nations across the world. They are pale to golden in color and are light, crisp beers with a floral aroma
Märzen – First brewed by Spaten, these mildly hoppy, toasty, and often copper-colored lagers are enjoyed during the fall, particularly during Oktoberfest celebrations, due to their strong ties to Munich
Helles – A pale crisp lager akin to a Pilsner that also originated in the City of Munich
Schwarzbier – Literally “Black Beer” in English, this beer uses long roasted dark malts to obtain its color, but its flavors are mild and not as rich or roasty as similarly colored styles, such as stouts or porters
Rauchbier – A lager that utilizes malt that has been smoked over open flames thus imparting strong smoky and rich malt flavors in this unique brew
A History of German Beer Styles
Weissbier’s Rise to the Top
While Germany is considered by many as one of the world’s epicenters of beer, brewing did not originate in Germany. The history of brewing dates back to the Ancient Sumerians, the oldest known civilization on Earth, who produced the oldest known beer recipe in a 3,900-year-old poem/ode to the goddess of brewing, Ninkasi. Believe it or not, the Sumerians were also the first to brew a wheat beer, or Weissbier in German, which is often associated strictly with Bavaria. German Weissbiers are obviously more complex than their Ancient Sumerian predecessors and are legally required to be at least 50% malted wheat and are most often unfiltered and appear hazy due to the excess yeast left in the beer.
Germany’s Weissbier history begins in the 12th century in Bohemia, where the Pilsner originated as well. The style then spread to Bavaria, where in 1520 the noble Degenberg family obtained the exclusive rights to brew wheat beer, which they profited from for 80 years. The rights were reclaimed by the Wittelsbach family, the original owners and ruling class of Bavaria, who monopolized the wheat beer brewing process. Weissbier was to be poured by every innkeeper in Bavaria and bought exclusively from the network of noble-owned breweries for the next 200 years. By 1798, Weissbier’s prominence waned and the dukes began selling the brewing rights to monasteries and private breweries.
This was around the time during which Weihenstephan Brewing, well known for its Hefe Weissbier, fell under the control of the state of Bavaria. Weihenstephan is the oldest brewery in the world and continues to produce world class beers to this day.
Source: Beer Bottle Histories - North American Soda & Beer Bottles (http://www.sodasandbeers.com/SABBottleHistoriesBeer.htm)
Bottle & Product Histories: Beer
Beer was brewed from ancient times and no doubt it was
bottled soon afterwards.
The first records of brewing are about 6,000 years old and refer to
the Sumerians. Sumer was between the Tigris and Euphrates rivers, and
in the area of Southern Mesopotamia. An ancient clay tablet engraved
with the Sumerian language outlines the steps for making beer. This
tablet has pictographs that represent barley, baking bread, crumbled
bread being put into water and made into mash and then a drink. The
Sumerians perfected this process and are recognized as the first
civilized culture to brew beer. They brewed beer that they offered to
their gods, as in an 1800 B.C. hymn to Ninkasi, the goddess of brewing.
The beer was drunk out of jars with a straw to help filter out the
sediments and soggy bread that was part of the brew.
When the Sumerian empire collapsed, the Babylonians became the rulers
of Mesopotamia and incorporated the Sumerian culture into their own.
As a result, they acquired the knowledge to brew beer. The Babylonians
brewed at least twenty different types of beer. The beers were brewed
with pure emmer (prehistoric grain type similar to spelt), pure
barley or a mixture of grains. The Babylonian king Hammurabi enacted a
law that established a daily beer ration. The higher one's rank, the
more beer that was rationed. High priests received two and a half
times the ration of a common worker. The Babylonians also exported
beer to Egypt.
The Egyptians soon learned the art of brewing and carried the
tradition into the next millennium. They continued to use bread for
brewing beer but also added dates to flavor it. The ancient Egyptians
even had a hieroglyph for the word brewer, which illustrates the
importance of brewing to the culture. Ancient Egyptian documents show
that beer and bread were part of the daily diet and were consumed by
the wealthy and poor alike. Beer was an important offering to the gods and
was placed in tombs for the afterlife.
With the rise of the Greek and Roman Empires, beer continued to be
brewed, but wine was the drink of preference. In Rome itself, wine
became the drink of the gods and beer was only brewed in areas where
wine was difficult to obtain. To Romans, beer was the drink of
barbarians. Tacitus, a Roman historian, wrote about the Teutons, the
ancient Germans, and documented "a liquor from barley or other grain"
that these people drank.
During these ancient times, brewing beer was a woman's job. In some
cultures beer was brewed by priestesses in the temples. During the
Middle Ages this changed when brewing was carried on in monasteries.
It is interesting that monks were able to drink beer when fasting.
Beer was a drink and not food. This runs contrary to later beliefs
where beer was considered "liquid bread."
When Columbus first arrived in the New World, the American Indians
that he met served him a corn-based beer. The Aztecs, Incas and Mayans
had been brewing such beers for hundreds of years before the arrival
of Europeans.
Beer was considered a healthy drink for most of its history and was a
good source of nourishment. It was often advertised as good for the
sick and elderly. But perhaps its biggest health advantage was that
beer was brewed. At a time when impurities and microbes in water were
unknown, beer provided a safer drink as it was boiled as part of the
brewing process. Beer drinkers were less susceptible to waterborne
diseases and thus healthier. Over the centuries this trend was noticed
but was not understood until pasteurization was discovered.
Most beers brewed over the last four hundred years have been made of
the following ingredients:
Barley malt for fullness
Hops for bitterness
Yeast to convert barley malt sugars into alcohol
Water to serve as a medium for the fermentation process
Brewers over the years have substituted other grains for the barley.
These include corn, wheat and rice.
The early brewing centers of modern times were England, Holland and
Germany. English beers had the greatest influence on American
consumers at the country's founding and through the mid-Nineteenth
century. The first brewing center in the New World was run by
the Dutch on Manhattan Island or New Amsterdam. During the second half
of the Seventeenth Century, the Dutch were exporting some beer, but
much beer was still imported. The problem in Manhattan was getting a good supply of water and this problem was not addressed for another
150 years. Even so, the brews were various ales and beers also common
in England.
Starting around 1700, Philadelphia started to emerge as the brewing
center of the English Colonies in America. A good supply of water, the
productive farmlands that surrounded Philadelphia, a thirsty
population and the skills of the English trained brewers were
responsible for this. Soon Philadelphia beers were exported to all of
the English Colonies in America. George Washington was an ardent fan
of Philadelphia porter and ordered quantities of it for consumption at
his Mount Vernon home. The beer bottles of this period were the common
black glass bottles that were also used to bottle wine and other
spirits. In the late 1700s, the shapes of wine and beer
bottles started to evolve in different directions. Wine bottles
started to be more slender with higher shoulders, while beer bottles
tended to be shorter with lower shoulders. This beer bottle shape,
known as the porter shape, was associated with English beers and
remained in use until well after 1900. In the 1840s, a new English
bottle form started to evolve from the porter shape. These bottles
grew taller and narrower and the neck evolved a bulge. This style
was exported to North America and is known as the export beer shape.
American glass manufacturers started to produce this form in about
1855. The earliest form of this type is known as an early export
beer. Later, in the 1880s, a bottle with a softer shoulder and a
gentler bulge in the neck evolved, called the later export beer
shape. This shape endures today in many beer bottles.
During the 1830s, a new style of beer, lager, was being brewed in Germany. The
Germans had isolated a strain of yeast that produced a lighter beer.
This yeast was a bottom fermenting yeast (Saccharomyces uvarum) as
opposed to the top fermenting yeasts (Saccharomyces cerevisiae) used
to produce the heavier English style beers. In 1840, John Wagner
smuggled some of this yeast out of Germany and to Philadelphia, where
he brewed the first lager beer in America. The earliest lager beer
bottle had a distinctive shape that is called an
early lager. This
beer did not find popularity immediately in Philadelphia where the
German population was well established, but did become very popular in
the Midwest where many of the new German immigrants were settling.
Slowly, lager beers gained in popularity in the older settled areas of
the United States, but it took almost thirty years until the German
style lager beers usurped the English style beers in these areas.
Lager beer bottles of this period are called late lagers. By this
time, the Midwestern breweries in Saint Louis and Milwaukee had a
firm handle on
the market and would eventually dominate beer production in the United
States. Around 1875, a new style of beer bottle appeared in the New
York area. This style is called the champagne beer style and
remained popular until well into the twentieth century and its shape
can be seen in many of today's beer bottles.
Around 1875, beers started to acquire trademarked names. Prior to
this point beers were advertised by their brewer, the type of beer
or the region they were from. Widely advertised types of beer
included: lager, ale, brown stout, cream ale, weiss beer and bock.
Regional branding included: Philadelphia Porter and Ale, Saint
Louis Lager, Milwaukee
Lager, and Pentucket Ale. Of the branded beers, one of the most
enduring is Budweiser (1876), but others include Pabst Blue Ribbon
(1882) and Miller High Life (1903).
Source: What are the main types of beer? A complete guide (https://hospitalityinsights.ehl.edu/beer-types)
Q: What are the main types of beer?
While craft brewing, home brewing and beer tasting have exploded in popularity in recent years, beer ultimately consists of a few basic styles. In this complete guide on the main types of beer, you'll brush up your knowledge about common and popular styles of beer to increase your familiarity with one of the world's oldest drinks.
Where did beer originate from?
Many countries have tried to stake their claim as the first creators of the amber nectar, however, the first barley beer production dates back to the period of the Sumerians, around 4,000 BCE. Sumerians were the earliest known civilization from what is now known as Iraq.
Many believe that Germany was in fact the birthplace of beer, as it was in Germany where modern and popular beer styles were developed in the Middle Ages, and the country is renowned for beer drinking embedded in its cultural identity. However, history shows there were many ancient civilizations, from the Sumerians, Babylonians, Egyptians, Greeks, Romans, Chinese and more, who were involved in the development of the refreshing beverage we enjoy today.
The beer market in 2023
According to Statista, revenue in the beer segment amounts to a hefty US$610.00bn in 2023, and, it seems, our thirst for beer is unquenchable as the market is expected to grow annually by 5.44% (CAGR 2023-2027).
For any entrepreneurs looking to enter the F&B industry, beer would be the most sensible bet. A Statista analyst said "Beer is the most important segment in the global Alcoholic Drinks market, both by volume and value. In comparison to other segments of the Alcoholic Drinks market, this segment is already quite concentrated, with the top 5 players accounting for roughly 60% of global volume – half of which is attributable to market leader AB InBev alone."
The 3 main beer types: Lager, Ale & Hybrid
Beer can be categorized into two main types: lagers and ales. The two points of differentiation between these major beer classifications are the type of yeast and the fermentation process. Ales are fermented with top-fermenting yeast at warm temperatures (60˚–70˚F), and lagers are fermented with bottom-fermenting yeast at cold temperatures (35˚–50˚F). Some beers can be classified as hybrids, combining both lager and ale characteristics.
Lager
Lagers are a newer style of beer with two key differences from ales. Lagers ferment for a long time at a low temperature, and they rely on bottom-fermenting yeasts, which sink to the bottom of the fermenting tank to do their magic.
The global lager market size reached US$328.4bn in 2021, and is predicted by research carried out by the Imarc Group to continue growing at a CAGR of 2.9% (2022-2027) to reach US$391.1bn by 2027. Lagers are common among European countries, including Czechia, Germany, and the Netherlands, as well as in Canada, where they make up more than half of all beer sales.
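The projection above is simple compound growth. As a quick sanity check (a minimal sketch: the base figure, rate, and horizon are the Imarc numbers quoted above, while the helper function and rounding tolerance are our own), applying the 2.9% CAGR to the 2021 base for six years lands close to the quoted 2027 figure:

```python
def project(base_bn: float, cagr: float, years: int) -> float:
    """Compound annual growth: base * (1 + CAGR) ** years."""
    return base_bn * (1 + cagr) ** years

# US$328.4bn in 2021, grown at 2.9% per year through 2027 (6 years)
forecast_2027 = project(328.4, 0.029, 2027 - 2021)
print(f"Projected 2027 lager market: US${forecast_2027:.1f}bn")
# → roughly US$389.8bn, within about 1% of the quoted US$391.1bn
```

The small gap to the published US$391.1bn is consistent with rounding in the reported CAGR.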
Pilsner
A subspecies of lager, pilsner beers are distinguished by their water, which varies from neutral to hard. Pilsners are among the hoppiest lagers and generally have a dry, slightly bitter flavor. Their light golden color, clear body, and crisp finish make Pilsners a popular summer beer.
American Lager
American lagers are characterized by their pale color, which ranges from straw to a golden hue. They are highly carbonated beers with a refreshing, crisp taste. The flavor may vary from brand to brand but tends to lack the hops and malt that are present in ales. They make for easy drinking when getting together for BBQs in the garden during summer.
Ale
Ale is a general category of beer: You'll find sub-categories like brown ales or pale ales. This is the oldest style of beer, which dates back to antiquity. What distinguishes an ale - and also makes this category of beer accessible for home brewers - is a warm-temperature fermentation for a relatively short period of time. In the brewing process, brewers introduce top-fermenting yeasts which, as the name suggests, ferment on the top of the brew. The fermentation process turns what would otherwise be a barley and malt tea into a boozy beverage.
Research carried out by the Imarc Group indicated that the global craft beer market size reached US$117.1bn in 2022 and expects the market to reach US$221.5bn by 2028, exhibiting a very promising growth rate (CAGR) of 10.8% during 2023-2028. This research encapsulates lager, ale and hybrid beers, but the rapid growth of the craft beer category shows a thirst for weird and wonderful creations from independent breweries and, perhaps, a shift away from traditional lagers, which have historically been the more popular type of beer.
Porter
A type of ale, porter beers are known for their dark black color and roasted malt aroma and notes. Porters may be fruity or dry in flavor, which is determined by the variety of roasted malt used in the brewing process.
Stout
Like porters, stouts are dark, roasted ales. Stouts taste less sweet than porters and often feature a bitter coffee taste, which comes from unmalted roasted barley that is added to the wort. They are characterized by a thick, creamy head. Ireland's Guinness may be one of the world's best-known stouts.
Blonde Ale
This easy drinking ale is a summer favorite, thanks to its light malt sweetness and trace of hops, which add aroma. As the name suggests, blonde ales have a pale color and a clear body. They tend to be crisp and dry, with few traces of bitterness, rather than hop-heavy or dank.
Brown Ales
Brown ales range in color from amber to brown, with chocolate, caramel, citrus, or nut notes. Brown ales are a bit of a mixed bag, since the different malts used and the country of origin can greatly affect the flavor and scent of this underrated beer style.
Pale Ale
An English style of ale, pale ales are known for their copper color and fruity scent. Don't let the name fool you: these beers are strong enough to pair well with spicy foods.
Related to the pale ale is the APA, or American Pale Ale, which is somewhat of a hybrid between the traditional English pale ale and the IPA style. American pale ales are hoppier and usually feature American two-row malt.
India Pale Ale
Originally, India Pale Ale or IPA was a British pale ale brewed with extra hops. High levels of this bittering agent made the beer stable enough to survive the long boat trip to India without spoiling. The extra dose of hops gives IPA beers their bitter taste. Depending on the style of hops used, IPAs may have fruit-forward citrus flavors or taste of resin and pine.
American brewers have taken the IPA style and run with it, introducing unusual flavors and ingredients to satisfy U.S. beer drinkers' love for the brew style.
Wheat
An easy-drinking, light style of beer, wheat beers are known for a soft, smooth flavor and a hazy body. Wheat beers tend to taste like spices or citrus, with the hefeweizen or unfiltered wheat beer being one of the more common styles.
Sour Ale
An ancient style of beer that's taken off in popularity in recent years, sour ales are crafted from wild yeasts, much like sourdough bread. These beers are known for a tart tang that pairs well with tropical fruit and spices. Within sour beers, you'll find lambics, which are Belgian sour beers mixed with fruit, goses, a German sour beer made with coriander and sea salt, and Flanders, a Belgian sour beer fermented in wood tanks.
Cooking with beer
Not just a popular tipple, beer is also a common staple in the larder of many chefs. Beer makes for a useful ingredient thanks to its carbonation and the earthy taste of the hops and barley, which add depth of flavor to cooking. All beers work to tenderise and moisten meat dishes, and the carbonation works as a leavening agent in baking recipes, resulting in airy cakes, breads, pancakes and batters.
When it comes to using beer for taste, as with all cooking, the key is finding the perfect balance of flavor. Malty dark beers like stouts and porters, along with assertive ales like IPAs, are typically used in wintery stews, braises and pie fillings, giving a rich and slightly sweet taste. Lagers (Pilsner, Kölsch, Märzen, etc.), by contrast, are dry and crisp, lending themselves to roast chicken and beer-battered fish recipes. While the beer cooks, the majority of the alcohol evaporates, so no need to worry about spiking your dinner guests!
We hope this guide to beer styles has whetted your appetite! To deepen your culinary and beverage knowledge, consider joining the EHL community.
This five-month intense program of 25 masterclasses will help you shape your business project thanks to management modules and the tools EHL developed for entrepreneurs. It will also immerse you in culinary operations, from fine-dining cuisine to freshly prepared takeaway food, catering, oenology and R&D.
Source: Science of Beer - Orlando Science Center (https://www.osc.org/science-of-beer/)
Science of Beer
Jan 16, 2023
Making a delicious beer is more than just mixing barley, hops, water, and yeast!
The brew must undergo a series of biochemical reactions to convert barley to fermentable sugars. It also takes time for yeast to live and multiply so they can convert those sugars to alcohol.
Let's talk science.
Our friends at Deadwords Brewing recently taught us about some of the chemical reactions that take place during the brewing process. For example, the water that goes into their beer undergoes a process called reverse osmosis, which removes impurities, minerals, and just about everything else! This makes their starting point more consistent, so that all their batches of beer taste the same.
The reverse osmosis process also allows the Deadwords brewers to program specific levels of minerals, nitrates and nitrites, and other compounds in the water. They keep formulas for different water around the world, so they can program in the desired formula and replicate any water from throughout the world – or history! For example, the water used in Germany for brewing beer is different than the water used in China. If Deadwords wants to brew a German beer, they simply input the formula accordingly, and voila! German water for German beer!
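The workflow described above can be sketched as a simple lookup (illustrative only: the profile names and ppm values below are rough textbook figures, and the function is a hypothetical stand-in, not Deadwords' actual system). Starting from near-zero RO water, a stored "formula" is just a table of mineral targets, scaled by batch size:

```python
# Illustrative water "formulas" as mineral targets in ppm (mg per liter).
# These are rough textbook profiles, not any brewery's actual recipe.
WATER_PROFILES = {
    "pilsen": {"calcium": 7, "sulfate": 5, "chloride": 5},       # famously soft
    "munich": {"calcium": 75, "sulfate": 10, "chloride": 2},     # carbonate-rich region
    "burton": {"calcium": 275, "sulfate": 610, "chloride": 35},  # famously hard
}

def mineral_additions(profile_name: str, liters: int) -> dict:
    """Milligrams of each mineral to add to a batch of near-zero-ppm RO water."""
    profile = WATER_PROFILES[profile_name]
    # RO water starts at roughly 0 ppm, so the target concentration is the
    # addition itself; ppm is mg/L, so scale by batch volume.
    return {ion: ppm * liters for ion, ppm in profile.items()}

print(mineral_additions("pilsen", 20))
# → {'calcium': 140, 'sulfate': 100, 'chloride': 100}
```

Swapping the profile name is all it takes to "brew German water for German beer."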
Types of Beer
There are two overarching categories of beer: ale and lager. Ale is fermented at a higher temperature using a warmth-preferring yeast (Saccharomyces cerevisiae), which is called top fermentation. It has a more hoppy flavor to it and a higher alcohol content (4% - 5%). The fermentation and maturation processes are both relatively short. Stouts, porters, and IPAs are all created using variations of the ale brewing process.
Lager, on the other hand, is fermented at a lower temperature using cooler weather yeast (Saccharomyces pastorianus). This is known as bottom fermentation, and both the fermentation and maturation processes for lagers are relatively longer than those of ales. They have less of the hops at the forefront, and a generally lower alcohol content (3.2% - 4%). Pilsners and dark lagers are also created using variations of this process.
A Little History
The first fermented beverage on record, known as kui, was brewed in China c. 7000 BCE. This concoction, however, was created using rice water. For this reason, most historians do not recognize the invention of beer until c. 4000 BCE in Mesopotamia (modern-day Iraq), when the Sumerians created the first beverage using the brewing process we recognize today. Sumer is known as the first modern civilization, and the people loved beer so much, it was a daily dietary staple!
For much of history, beer was brewed primarily by women. In Sumer, beer was originally brewed by the priestesses of Ninkasi. Brewing then became part of meal preparation for the common woman. And the Sumerian women were not alone; as the practice of brewing spread across borders and empires, women continued to be the main brew masters in places like Egypt and the Celtic lands of Northern Europe.
During the Medieval period, monks mastered the art of brewing what they called “liquid bread” using a scientifically sequenced process. The beverage was created with an eye towards nutrition; the monks needed some sort of nutritious and good tasting beverage that they could drink during periods of fasting.
Though today's accepted process of brewing was created in Mesopotamia, the art was perfected by the Germans. Between 800 BCE and 1516 CE, the Germans slowly refined the brewing process, eventually instituting the Reinheitsgebot (purity law), which limited the accepted ingredients for beer to water, barley, hops and, later, yeast.
Raise a Glass to a Good Cause
Join Orlando Science Center on Saturday, January 21, 2023 for the second annual Science on Tap fundraiser. Tickets support our mission to inspire science learning for life. Stop by to learn more about the science behind your favorite brews, challenge your friends to a cornhole tournament, and sample beers from over 35 of Central Florida's favorite breweries!
Source: Sexual revolution in 1960s United States - Wikipedia (https://en.wikipedia.org/wiki/Sexual_revolution_in_1960s_United_States)
The sexual revolution in the 1960s United States was a social and cultural movement that resulted in liberalized attitudes toward sex and morality. In the 1960s, social norms were changing as sex became more widely discussed in society. Erotic media, such as films, magazines, and books, became more popular and gained widespread attention across the country. These changes reveal that sex was entering the public domain, and sex rates, especially among young people, could no longer be ignored.[1]
With the introduction of the pill and second-wave feminism, women gained more control over their bodies and sexuality during the 1960s. Women could engage in sex without the risk of pregnancy.[2] At the same time, many women involved in the feminist movement questioned the traditional gender and sex roles ascribed to them. Women's liberation movements sought to free women from social and moral confines.[3]
Developments in the gay rights movement occurred during the same period, such as public demonstrations and protests to challenge discrimination against sexuality. Some activists began celebrating homosexuality, but the movement did not really take off until the Stonewall riots of 1969.[4]
In America, a dramatic shift in traditional ideas about sex and sexuality arose from a number of social changes. In 1969, Blue Movie, directed by Andy Warhol, premiered.[5] Films as early as the 1880s contained sexual images and some pornographic content was filmed in the 1920s, but Blue Movie was the first erotic film to gain wide theatrical release.[6][7] This film helped to introduce “porno chic” and the Golden Age of Porn (1969-1984). Pornography became a publicly discussed topic that was taken seriously by critics.[8][9]
Magazines depicting erotic and nude content increased in circulation at this time as well. After Playboy’s founding in 1953, the magazine was selling 1.1 million copies by 1960.[10] By 1970, it was circulating 5.4 million copies worldwide, and it peaked in 1972 at 7.16 million copies, with “a quarter of all American male college students reportedly reading it in the 1970s.”[11] The first Playboy Club opened in Chicago in 1960, and members were served food and drink by Playboy Bunnies. Clubs were later opened in 23 other U.S. cities.[12] The writer and prominent feminist Gloria Steinem went undercover at a Playboy Club in 1963 and found that women were often mistreated and exploited.[13]
In the 1960s, bans on erotic novels were challenged, and the standard for what could be banned changed. In Grove Press, Inc. v. Gerstein (1964), Grove Press challenged a ban on Henry Miller’s novel Tropic of Cancer, which it had published in the United States. The U.S. Supreme Court rejected the ban in a 5–4 vote, deciding that the novel had literary merit and was not “utterly without redeeming social value.”[14] In a similar case, Memoirs v. Massachusetts (1966), the Massachusetts Supreme Judicial Court had ruled that John Cleland’s Memoirs of a Woman of Pleasure (commonly known as Fanny Hill) was obscene. The U.S. Supreme Court overturned that ruling in a 6–3 vote, stating that this book was likewise not “utterly without redeeming social value.”[15] These rulings made banning books with sexual content more difficult, because any book with literary merit or social importance could no longer be considered obscene in the United States.[16]
Despite the changing social norms, it is unclear whether rates of sexual activity increased in revolutionary proportions during the 1960s. Daniel Scott Smith studied these rates and saw increases in certain groups between 1940 and 1967. As John Levi Martin explains, “[Smith] concludes that members of the upper classes, whom the female-college-student surveys tend to study, change later than the rest of the population, and it is when they finally rejoin their lower-class counterparts in sexual mores and behavior that we suddenly believe a sexual revolution is upon us. This argument is supported by historical work that suggests that premarital sex – and not simply sex with a fiancé – was by no means uncommon among urban working-class women before the 1920s.”[17]
In addition, Phillips Cutright examined data about the age of first menses in the Western population and illegitimacy levels from 1940 to 1968, and he found that no changes of revolutionary proportions occurred. The only “substantial increases” were among young whites with their future husbands.[1] He determined that the age of the first menses in women decreased from 1940 to 1968 likely due to better nutrition, which suggests that earlier “low illegitimacy rates among young girls were due to biological factors as well as to the social controls depressing sexual activity.”[1] He suggests that “the myth of an abstinent past and promiscuous present is highly exaggerated.”[1]
Beginning in 1960, “The Pill” provided many women with an affordable way to avoid pregnancy. Before the pill was introduced, many women did not look for long-term jobs because they expected to leave the job market when they became pregnant. Abortion was illegal and, when performed, presented many health risks. After the introduction of the pill, a higher percentage of women graduated from school and college, which allowed them to later pursue professional careers.[18]
With the invention of the pill, women could safely control their sexuality and fertility. Earlier methods of birth control, including herbal remedies and early condoms, were less reliable and in many places illegal.[19] The pill “was female-controlled, simple to use, highly effective, and most revolutionary of all, it separated reproduction and contraception from the sexual act.”[19] While critics claimed that the pill would lead to immorality, it allowed women to gain some freedom in making choices about their bodies.[20]
The Food and Drug Administration licensed the drug in 1960, and the government originally endorsed it as a form of population control to counter overpopulation, an aim that sat alongside President Lyndon Johnson’s social reform program, the Great Society, which sought to eliminate poverty and racial injustice.[21] “The Pill”, as it came to be known, was extraordinarily popular, and despite worries over possible side effects, by 1962 an estimated 1,187,000 women were using it.[22] Even so, the pill remained a controversial subject. In 1964, it was illegal in eight states, including Connecticut and New York.[23]
The pill was easier to obtain for married women, especially after Griswold v. Connecticut (1965). The U.S. Supreme Court sided with Estelle Griswold, the executive director of the Planned Parenthood League of Connecticut, and stated that the right to privacy for married couples was granted in the U.S. Constitution.[24] While this ruling made it easier for married women to obtain birth control, unmarried women who requested gynecological exams and oral contraceptives were often denied or lectured on sexual morality. Those women who were denied access to the pill often had to visit several doctors before one would prescribe it to them.[25] In 1972, the Supreme Court extended these rights to unmarried couples in Eisenstadt v. Baird.[26]
Criticisms of the pill developed among certain groups, Black populations in particular. The origin of the pill as a form of population control for those living in poverty created distrust among groups that were systematically impoverished.[27] Robert Chrisman argued that birth control could be used as a tool of genocide with racist motives, saying “contraception, abortion, sterilization are now major weapons in the arsenal of the U.S.’ Agency for International Development.”[28] Attendees at the Black Power Conference in Newark, New Jersey, also argued against birth control and feared it was a tool to limit Black power.[27]
These fears about the pill continued to develop through the decade, and even into the 1970s. The United States, especially the South, had a history of controlling Black fertility, first under slavery and later through sterilization.[27] In Dick Gregory’s cover story for the October 1971 edition of Ebony magazine, he wrote, “back in the days of slavery, Black folks couldn't grow kids fast enough for white folks to harvest. Now that we've got a little taste of power, white folks want to call a moratorium on having children."[27] Still, a number of Black women chose to take the pill because they desired control over their fertility.
In 1969, journalist Barbara Seaman published The Doctors’ Case Against the Pill, which outlined a number of side effects. She provided evidence for “the risk of blood clots, heart attack, stroke, depression, weight gain, and loss of libido.” Her book would lead to congressional hearings about the safety of the pill in the 1970s.[23]
Second-wave feminism developed in the 1960s and 1970s, demanding equal opportunities and rights for women. The feminist and women's liberation movements helped change ideas about women and their sexuality.[29] In The Feminine Mystique, Betty Friedan discussed the domestic role of women in 1960s America and the feeling of dissatisfaction with that role. Friedan suggested that women should not conform to this popularized view of the feminine as “The Housewife” and that they should participate in and enjoy the act of sex.[30]
However, despite second-wave feminists sometimes being considered “anti-sex,” many women were interested in liberating women from certain sexual constraints. The women's liberation movement prioritized “its cultural challenge not to unjust laws but to the very definitions of female and male, the entire system then called ‘sex roles’ by sociologists.”[29]
Homosexuality was still considered a developmental maladjustment by medical establishments throughout the 1950s and 1960s.[32] Prejudices against homosexual behavior were cloaked in the language of medical authority, and homosexuals were unable to argue for the same legal and social rights.[33]
Homosexuals were sometimes characterized as dangerous and predatory deviants. For example, the Florida Legislative Investigation Committee, between 1956 and 1965, sought out these 'deviants' within the public system, with a particular focus upon teachers.[34] The persecution of gay teachers was driven by the popular belief that homosexuals could prey on vulnerable young people and recruit them into homosexuality. In addition, male homosexuals were often seen as inherently more dangerous (particularly to children) than lesbians, due to stereotypes and societal prejudices.[34]
In addition, most states had sodomy laws, which made anal sex a crime punishable by up to 10 years in prison.[35] Nevertheless, by 1971, the first gay pornographic feature film, Boys in the Sand, was being shown at the 55th Street Playhouse in New York City. With this movie, the gay community was launched into the sexual revolution and the porn industry.[36] Earlier homoerotic films had existed, especially in Europe, as early as 1908, but these films were underground and sold through discreet channels.[37]
The gay rights movement was less popular in the 1960s than later decades, but it still engaged in public protest and an attitude “celebratory about the homosexual lifestyle.”[4] The Mattachine Societies in Washington, D.C. and New York staged demonstrations that protested discrimination against homosexuals. These groups argued “that the closing of gay bars was a denial of the right to free assembly and that the criminalization of homosexuality was a denial of the ‘right to the pursuit of happiness.’”[4] In 1969, the United States had fifty gay and lesbian organizations that engaged in public protest.[4]
These gay rights groups also challenged traditional gender roles, similar to feminist movements of the time. The Mattachine leaders emphasized that homosexual oppression rested on strict definitions of gender behavior. Social roles equated “male, masculine, man only with husband and father” and “female, feminine, women only with wife and mother.”[38] These activists saw homosexual women and men as victims of a “language and culture that did not admit the existence of a homosexual minority.”[38] The homophile movement and gay rights activists fought for an expansion of rights based on theories similar to those that drove some heterosexual women to reject traditional sexual norms.
In the early morning of June 28, 1969, police raided the Stonewall Inn, the most popular gay bar in New York City, located in the city's Greenwich Village neighborhood. The police asked for identification from patrons of the bar; asked to verify the sex of cross-dressers, drag queens and trans people; and assaulted lesbian women when frisking them.[39] As they took people out of the club, a scuffle began between a woman and police officers, and it quickly escalated into a riot. Protests continued into the next day. The Stonewall riots are considered a defining moment in the gay rights movement and have become a “‘year zero’ in public consciousness and historical memory.”[4]
The Stonewall riots of 1969 marked an increase in public awareness of gay rights campaigns, and it increased the willingness of homosexuals across America to join groups and campaign for rights.[4] However, it would be misleading to conclude that resistance to homosexual oppression began or ended with Stonewall. David Allyn argues that numerous acts of small-scale resistance are necessary for large political movements, and the years preceding Stonewall played a role in creating the gay liberation movement.[40]
The Stonewall riots are a pivotal moment in gay rights history because they enabled many members of the gay community to identify with the struggle for gay rights.[41] Gay life after Stonewall was just as varied and complex as it was before. Still, the Gay Liberation Front, founded in 1969, sought “to create new ‘social form and relations’ that would be based on ‘brotherhood, cooperation, human love, and uninhibited sexuality’.”[4]
Revolutions | Was the sexual revolution of the 1960s a liberating moment for women? | yes_statement | https://academic.oup.com/tcbh/article/34/2/354/7110241 | Political Sexual Revolution: Sexual Autonomy in the British Women's ...

A Political Sexual Revolution: Sexual Autonomy in the British Women’s Liberation Movement in the 1970s and 1980s
[email protected]. Many thanks to Chris Hilliard for his insightful feedback. I am also very grateful to Florence Sutcliffe-Braithwaite for asking astute questions on an early draft, together with the anonymous reviewers whose criticisms and suggestions considerably improved this article.
Emma Wallhead, A Political Sexual Revolution: Sexual Autonomy in the British Women’s Liberation Movement in the 1970s and 1980s, Twentieth Century British History, Volume 34, Issue 2, June 2023, Pages 354–376, https://doi.org/10.1093/tcbh/hwad026
Abstract
In the 1970s and 1980s, women across Britain—particularly those in the Women’s Liberation Movement (WLM)—took part in a distinct sexual revolution fuelled by a very specific question—who gets to determine the ways in which I am sexual? The active engagement by women with this question of sexual selfhood belies a historiography of sexual revolution—real or imagined—in which women were the passive beneficiaries (or victims) of technological, cultural, religious, social, and/or economic shifts. Drawing on the writing of women in the feminist press, mainstream media, books, and pamphlets, this article describes the specific contribution of the WLM to shaping new possibilities for a sexuality defined, and controlled, by women. I argue that the WLM combined a powerful political framework with an influential social network to significantly contribute to a far-reaching process of deconstructing and recasting female sexuality and sexual relations.
‘Not long ago I made the intellectual decision to become bisexual’, 25-year-old London resident Linda told Guardian reporter Lindsay Mackie in 1973.[1] Separated from her husband and in love with a man at the time of her interview, Linda continued: ‘I am in a bit of a state of flux with this new-found independence but I feel amazingly self-contained.’ For Linda, this independence was as much physical as anything else. ‘I come back home and I feel the outlines of myself’, she explained. ‘I’m not living through anyone else and almost every day I say to myself “I’m me and I’ll never be anything else and I’m satisfied with it.”’ Linda’s description of her choice as an ‘intellectual decision’ is telling. From the late 1960s, British women started a conversation about their bodies and sex as part of a broader, self-conscious exploration of autonomy.[2]
Linda had attended her first women’s group meeting in 1972. It was the start of an involvement with the Women’s Liberation Movement (WLM) that would alter her perspective and ultimately her life. ‘I see everything now in a political way’, she explained. More than a call for collective action, however, Linda’s political perspective was a call to reconsider the way in which she conceived of and lived her life as an individual, including how she lived her life as a sexual being. Florence Sutcliffe-Braithwaite and Natalie Thomlinson point to a distinction between the ‘typical concerns of post-1968 feminism’ and the vernacular understandings of British working-class women in ‘individuality, autonomy and voice for women’.[3] However, it was not so much that women in the WLM were not interested in ‘individuality, autonomy and voice’. Rather, the context for that interest was different. The assertion of sexual autonomy by women of the WLM was underpinned by a political critique rejecting sexual norms that defined female sexuality in terms of men’s interests. This article considers women like Linda who, as part of their engagement with the WLM, began to think, and talk, in terms of ‘a sexuality that is autonomous from men’s interest’, not just as an intellectual exercise but as a way of living.[4] While the political framework on which this work was built led women in the WLM down some different pathways to those of other women, the research for this article has also demonstrated that there were many areas where the expression of autonomy ultimately looked the same. The WLM both reflected and contributed to broader changes in which women, more generally, considered and expressed their sexual selves.
On one level, there is nothing new about associating the terms ‘sexual revolution’ and ‘sexual freedoms’ with British women of the late twentieth century. Women are at the heart of many accounts of sexual revolution. Callum Brown links increasing pre-marital sexual activity by women with religious decline from the 1950s.[5] Alana Harris, writing about contraception, and David Geiringer, writing about sexual freedoms, both illuminate how women were simultaneously the topic of, and absent from, religious discourse about sex.[6] Sexual images of women evidence a ‘sexualisation’ of culture for Marcus Collins and Ben Mechen.[7] Of course, the most popular way in which women are brought into a landscape of sexual revolution is by reference to the unlocking of women’s sexual freedoms with the increasing availability of the contraceptive pill. Hera Cook observes that the increasing availability of reliable contraception enabled women to ‘[move] towards autonomous sexual activity’ as it was possible for those women who could access the pill ‘to have sexual relations without becoming pregnant and/or marrying’.[8] But the extent of these freedoms has been contested, with some significantly discounting their consequence. These commentators have asserted that, rather than freedom, women ultimately became victims of greater pressure to have sex. For Sheila Jeffreys, as women were co-opted into male notions of sexuality, the ‘sexual revolution completed the sexualisation of women’.[9] Dominic Sandbrook suggests that the sexual revolution, together with access to the pill, might have enabled women to have more sex but, rather than marking greater autonomy, women merely felt more pressure to have sex with men.[10]
What all of these accounts have in common is a focus on the impact on women of various changes to the political, religious, cultural, and economic environment that, in the words of Callum Brown, tends ‘to downplay the significance of the popular sexual revolution’.[11] Such accounts obscure the nature and extent of a far-reaching and influential intellectual engagement with sexual self-determination by women such as Linda from the early 1970s. While it is true that many people were concerned about the impact on women of an increasingly sexualized environment, this says little about the way in which women, themselves, thought about living a sexual life. While the freedom to have sex with men without fear of pregnancy certainly contributed to an environment in which women could live an autonomous sexual life, the freedom to have sex with men was not, in itself, the achievement of an autonomous sexual life.
This article provides a new account of sexual revolution as a series of changes driven by women brought about by an all-encompassing claim to sexual autonomy and sexual self-determination. It was a revolution facilitated by a convergence of moments and shifts such as increased labour market participation by women, growing numbers of women enrolling in higher education, abortion law reform, religious decline, increasing cultural permissiveness and, of course, advances in contraceptive technology. However, the WLM claim for sexual autonomy was not simply an inevitable outcome of these various trajectories and to position it in that way misrepresents this historical moment, detracting from its contribution in shaping social and intellectual history. This was not a case where women were swept along on an inexorable wave. They consciously and actively engaged in a very specific and new exploration of the meaning and expression of bodily autonomy, including the right to determine and own their own sexual lives. Rather than passively accepting dominant narratives of women’s sexuality, women of the WLM intellectually tackled a fundamental question—who determines the ways in which I am sexual?—and they posed this question publicly and in large numbers.
None of this is to suggest that the WLM was monolithic. While it is difficult to quantify its extent,[12] the WLM was a national movement with which thousands of women variously engaged, whether as full-time activists, as ad hoc participants or as women engaging only through access to feminist publications such as Spare Rib or even via exposure through mainstream publications.[13] However, the WLM was also diverse in its composition.[14] Not all women in the WLM agreed on all matters—with some significant and serious disagreements being a notable element of the movement. Nevertheless, the movement was grounded on a fundamental interest in the position of women, specifically the oppression of women, across all areas of life.[15] This was the political framework that defined the WLM and it was this interest that led, for many feminists, to an active engagement with understanding, and challenging, that oppression as it applied to their sexual lives. It was an interest that commonly centred on a ‘critique of patriarchal heterosexuality’.[16]
For Hannah Charnock, the sexual behaviour of teenage girls of the late twentieth century was significantly shaped by peer networks. Charnock argues ‘that sexuality needs to be understood as a social phenomenon that was shaped by and performed through individuals’ relationships with their local communities and immediate social networks’.[17] Similarly, beyond providing a political framework that demanded the examination of sexual autonomy, the WLM was also a significant social network that both challenged, and supported, women to undertake the often confronting work of conscious self-examination necessary to an autonomous sexual life. A movement of thousands of women, the WLM provided both opportunity and support for a wide-ranging discourse on sexual autonomy. Women’s sexuality had previously been primarily discussed by men, albeit interrupted by singular women who championed sexual autonomy such as the women associated with The Freewoman journal,[18] but now large numbers of women were participating in a new conversation about women’s sexual autonomy. Not unlike Charnock’s teenagers, privacy still mattered to the women of the WLM but it was not the overriding concern it had been and, contrary to the suggestion by Hera Cook that women were reluctant to speak openly about sex in the 1970s, large numbers of women—both in the WLM and more broadly—were, in fact, talking explicitly and openly about sex across the decade.[19] This article draws heavily on the archive of that discourse—through pamphlets and documents of the WLM, through publications such as Spare Rib and Shrew, and through feminist contributions in the mainstream press.
In doing so, it is necessary to acknowledge the contribution of transnational movements, particularly the WLM in the USA, that had significant influence on the women in the British WLM.[20] Women in the British WLM voraciously read and circulated material on sexual autonomy written by women from the WLM in the USA who were asking similar questions.[21]
This article starts by examining prevalent representations of the WLM’s engagement with sexuality, including the well-trodden narrative of the futility of the WLM’s efforts to revolutionize sex. It is proposed that, despite bitter debate, the WLM provided impetus—both as a political framework and as a social network—for women from all factions to examine the place, and expression, of sex in their lives in a way that was distinguished from the ways in which women had previously related to sex and the expression of their sexual lives. This work included an appropriation of the discourses of sexologists and others to conceptualize a sexuality in which women had control of their bodies and responsibility for their sexual experiences. These tentative steps into autonomous sexual expression represented a significant challenge to sexual norms within heterosexual relationships. Part two of this article examines the issues that arose for women when considering incorporating new ways of thinking about sex into their own intimate relationships with men. While some successfully made changes, others saw it as an impossible task and some women felt that the only way that they could take control of their sexual lives would be to live those lives without men. Part three of this article looks at the rise of political lesbianism as a response to the challenges of sexual liberation within a heterosexual paradigm. However, there were also many women—such as Linda—who felt able to explore new ways of being intimate with a wider range of people. In the final part, the article examines celibacy in the context of sexual autonomy and its importance to many in strengthening their relationships with others.
A Self-defined Sexuality
In July 1974, more than 900 women gathered in Edinburgh for the sixth National WLM Conference.[22] At this conference, the following was added to the list of demands of the WLM: ‘We demand an end to discrimination against lesbians; and the right to a self-defined sexuality for all women’. At a subsequent 1978 Conference, at the recommendation of the Brighton Women’s Liberation Group, the demand was split into two parts with the latter part—the right to a self-defined sexuality for all women—repositioned as a preface to all demands.[23] In their proposal to split the demand, the Brighton Women’s Liberation Group argued not only that the demand for an end to discrimination against lesbians was distinct and should stand alone but that ‘a women-defined sexuality is not a demand but rather a basic principle/premise/assumption underlying the ideology of the WLM’.[24] The statement has, however, had a fraught history. It was not well understood at the time and its significance has, subsequently, been obscured in the historiography. In their 1982 reflection on the WLM, Anna Coote and Beatrix Campbell wrote that, despite the demand being subsequently split into two, it continued to be seen primarily as a demand about lesbians and that women—lesbian and otherwise—struggled to identify with the demand beyond its more evident call for ‘a commitment to lesbians’ civil rights’.[25] In a more recent account of the 1978 Conference, Jeska Rees similarly highlights a sense of confusion surrounding the sixth demand.
‘The status of the new, slimmed down sixth Demand was uncertain in the months following the conference … And the status of the new non-Demand—the right to a self-defined sexuality—was even less clear.’[26] For Coote and Campbell, in the confusion and controversy that surrounded the sixth demand, an opportunity had been lost for the movement to clearly engage with ‘a positive commitment to female eroticism, as something powerful and autonomous, which was shared by heterosexuals, lesbians and bisexuals …. It would be what women wanted it to be, not what men decreed’.[27] Yet it should not be interpreted that, because this particular opportunity had been lost, women across the WLM did not meaningfully engage with sexual self-determination.
In an oral history recorded as part of the Sisterhood and After Research Team, Jo Robinson recalls: ‘There wasn’t any defined sexual politics’.[28] It is clear, however, that there was a common theme that women were equally grappling with. This theme was self-determination. Across the board, women were examining their bodies and sexuality as sites of politics and patriarchy. If the personal was political, a woman’s body and sexuality were going to be primary sites to examine this relationship.[29] This examination came on the heels of a critique of a cultural sexual revolution that was seen as having favoured the interests of men. The critique was most famously articulated in Kate Millett’s literary study of female sexuality, Sexual Politics, which was published in the USA in 1970 and in the UK in 1971.[30] In Sexual Politics, described by one reviewer as ‘a book that [analysed] revolution in order to serve revolution’, Millett made it clear that, rather than liberating women, sexual revolution had firmly maintained the subordination of women.[31] Focusing on writers who, for many, represented sexual liberation—Lawrence, Miller, and Mailer—Millett drew attention to a sexual revolution that maintained or even strengthened men’s power over women, including the definition of female sexuality by men in male interests. Noting that a woman’s ‘sexuality is very subject to social forces’, Millett contended ‘that the conditions of patriarchal society have had such profound effects upon female sexuality that its function has been drastically affected, its true character long distorted and long unknown’.[32]
In the same year that Millett first published Sexual Politics, Germaine Greer published The Female Eunuch which, despite major differences in style and substance, also drew on literary texts to illustrate that ‘the female is considered as a sexual object for the use and appreciation of other sexual beings, men’.[33] Greer likened the effect of this to a form of castration that erodes a woman’s energy, which she defined as ‘the power that drives every human being’.[34] In this, like Millett, the significance of sexual subordination for all areas of a woman’s life was drawn out by Greer. Similarly, feminist Eva Figes held up sex as a symbolic site for ubiquitous patriarchy: ‘The sex act is an effective symbol because it is so basic and animal, and can be considered “natural” – so if it is “natural” for a man to lie on top of a woman it would therefore follow that male domination is also part of the natural order.’[35] In this natural order, women were innately passive while men were aggressive. In her first book, Sex, Gender and Society, published in 1972, sociologist Ann Oakley observed: ‘The female’s sexuality is supposed to lie in her receptiveness and this is not just a matter of her open vagina: it extends to the whole structure of feminine personality as dependent, passive, unaggressive and submissive.’[36] If, as argued by Beatrix Campbell, a woman ‘only experiences sexuality as defined by men in a male-dominated culture’, what could be done to place a woman’s sexuality back into the hands of the woman?[37] One of the forms which this work took was to appropriate and recast key discourses about women’s sexuality, including appropriation and recasting of psychoanalytic and other ‘expert’ discourses about female sexual pleasure.
Putting Autonomy into Practice
Female sexual pleasure was not a new concept. It was of particular interest to sexologists, psychoanalysts and others from the early part of the twentieth century.38 Marie Stopes, one of the most frequently cited proponents of female sexual pleasure, identified a ‘physical yearning’ or ‘creative impulse’ in women that was ‘a physical, a physiological state of stimulation which arises spontaneously and quite apart from any particular man’.39 Some scholars, such as Hera Cook, position Stopes as an early proponent of female sexual autonomy.40 However, Stopes made it clear that what ‘stimulated’ and gave force to a woman’s sexual desire was her relationship with ‘some particular man’.41 This reliance on male partners in heterosexual sexual relations and, more specifically, married heterosexual relations is seen by many as undermining Stopes’ legacy. Margaret Jackson, for instance, has written: ‘Female sexuality cannot be simultaneously autonomous and dependent on men for its expression and fulfilment.’42
Beyond reliance on the stimulation of a male sexual partner to stir a woman’s ‘creative force’, Stopes and her contemporaries also located control of, and responsibility for, women’s sexual experiences with men. ‘I feel sure’, Stopes asserted, ‘that the prevalent failure on the part of many men to effect orgasms for their wives at each congress, must be a very common source of the sleeplessness and nervous diseases of so many married women’.43 Women were not expected to take an active role in the sexual relationship and, indeed, were reluctant to do so.44 Reflecting on the oral testimonies of close to 200 individuals who grew up in the first half of the twentieth century, Kate Fisher observes: ‘It was important for wives to maintain a passive sexual identity in which they received the attentions of husbands but did not themselves play any active role in initiating sexual activity.’45 It was largely accepted by women that the male would initiate intimacy and educate and guide them in sexual practices.46
While the advice in books and magazines placed female experience in the foreground, the sexual satisfaction of a woman was generally framed as both subordinate to and consequent to that of her male partner. ‘[A]ct lovingly’, advised columnist Evelyn Home in 1967, ‘even if desire is lacking: to want to give pleasure will restore your own desire all the more quickly’.47 This framing was not just a matter of discourse. Simon Szreter and Kate Fisher found that women connected sexual pleasure with the ‘the giving of love’ in their private lives.48 Their female interviewees ‘presented [sexual pleasure] as a happy by-product of a loving and caring act’ rather than as an end in itself.49
The WLM started from the premise that women are oppressed by men and called on women to think critically about their position and make their own decisions, independently of men.50 This manifesto extended to female sexuality. The WLM also called on women to understand how their sexuality had been defined by, and subordinated to, men and challenged women to take charge of their sexuality. In a pamphlet reproduced in Shrew in 1972, Angela Hamblin declared: ‘The distortion and mutilation of female sexuality is achieved through defining it exclusively in terms of its complementarity to men’s, and never in its own right. Male sexuality is defined as the “given” and female sexuality is then defined in relation to it.’51 The first step was to recognise that nexus. Pat Whiting called on women to discard what she saw as a ‘cultural myth that women but not men need “romantic love” before they can respond sexually’.52 ‘Historically’, she said, ‘women have suffered from too much romance and not enough realism’. For Whiting, the work of Masters and Johnson provided an effective dose of realism.53 Based on observations of approximately 10,000 sex acts, William Masters and Virginia Johnson provided what many people—including Pat Whiting—viewed as ‘hard scientific fact’ about female sexuality.54 According to Whiting, this evidence ‘exploded … the myth that women have to be “in love” to enjoy sex’. For the WLM, however, it was Masters and Johnson’s finding that the female orgasm was a product of clitoral stimulation that was of greatest interest. This finding debunked Freud’s proposition that, for a woman to have successfully matured, her ‘erotogenic susceptibility to stimulation has been successfully transferred … from the clitoris to the vaginal orifice’ during puberty.
If this process failed to take place, said Freud, a woman would remain prone ‘to neurosis and especially to hysteria’.55 The research of Masters and Johnson, which was seen to be the first scientific account of clitoral orgasm and undermined Freud’s theory, was quickly co-opted by feminists, most notably by Anne Koedt.
In 1968, Koedt, a member of the New York Radical Women, presented a paper at the women’s liberation conference in Chicago titled ‘The myth of the vaginal orgasm’.56 Koedt recapitulated the physiology of the orgasm: ‘There is only one area for sexual climax, although there are many areas for sexual arousal—the clitoris. All orgasms are extensions of sensations from this area.’ Koedt maintained that this meant that women’s sexual pleasure was not reliant on penetration. It also meant that, in focusing only on penetration, women’s sexual pleasure would necessarily be overlooked. Koedt made a call to action for women to ‘demand’ a new approach: ‘What we must do is redefine our sexuality.’ While Koedt had repackaged existing information, for many women it was their first exposure to the physiology of female sexual pleasure and the pamphlet was a revelation.57 Her paper, reproduced in Notes from the Second Year: Women’s Liberation: Major Writings of the Radical Feminists in 1970, soon spread to women’s liberation groups across the western world, including Britain.58 British feminist Beatrix Campbell found the paper ‘absolutely life-changing’: ‘The story was telling me about my sexual life, it detonated it, it was a detonator.’59 For Campbell, who had believed that her inability to enjoy heterosexual sex was her own fault, the insight about the source of female orgasm posed an ‘enormous challenge’ to everything she had understood and believed.
Koedt was, however, just one voice in an emerging conversation in which women talked to other women about their lives in a way that sought to give new perspective to those lives. Consciousness-raising groups formed an important part of this conversation. These were small groups of women who gathered regularly, often in the homes of members, to share and discuss their personal experiences. In this way, the political aspects of those experiences could be understood and foregrounded.60 Taking back control of their bodies and sexuality formed an important part of these sessions for many women, including an examination of the most intimate of details about relationships, bodies, and sex. Looking back at her experiences in the WLM, Sue Bruley describes how, in her group, ‘[w]e got down to the mechanics of sex and related our earliest sexual encounters and how men behaved in sex’.61 The feminist press was another form of conversation between women and provided women with new perspectives on their sexuality. A primary text for many women was Our Bodies, Ourselves, published by the Boston Women’s Health Book Collective in 1971.62 A British version was published by Penguin in 1978.63 The book encouraged women to learn about their bodies and to explore their own sexuality. Women were guided in self-examination techniques to explore and understand their sexual organs. While providing instruction on a process that women could carry out at home, the British edition encouraged women to carry out the self-examination ‘in a group where you can discuss and compare what you see with other women, and break down taboos about touching yourself and looking at each other into the bargain’.64
Newly established feminist publishers such as Virago, Onlywomen Press and Sheba also contributed to the growing literature on female sexual autonomy. One of these contributions was The Body Electric, written by British sex therapist and journalist Anne Hooper and published by Virago in 1980.65 Hooper had been inspired by workshops run by Betty Dodson in the USA and established the ‘London Pre-Orgasmic Workshop’ in partnership with Eleanor Stephens.66 The Body Electric, based on that workshop, called on women to learn about and be comfortable with their sexuality, and to take responsibility for their sexual lives. According to Hooper it was ‘vital to forget about your partner’s needs … and concentrate wholeheartedly on your own’ when someone was ‘trying to bring you to orgasm’. Acknowledging that it might feel selfish, she reassured women ‘that [this] ultimately is what sex is all about, where you disappear into an inner world of pure sensual adventure’.67 Yet for most women there was a need at some point to consider and communicate with partners, often men.
Autonomy in Relationships
Women were not looking for an autonomy that ‘promotes the sort of independence that involves disconnection from closer interpersonal involvement with others’.68 Rather, they sought to exercise autonomy within the context of their social relationships with others, including men. Open communication in marriage had been increasingly valued from the 1960s and, throughout the 1970s, many women sought to express their sexual needs more openly with their partners.69 There were, however, many women who still felt reluctant to raise issues. Beatrix Campbell was frank about the apprehension she felt when thinking about raising Koedt’s paper with her partner, Bobby. Reluctant to acknowledge her sexual dissatisfaction, Campbell admitted that she never had the conversation—‘I didn’t dare’.70
For many women, however, the WLM provided both the political framework and social support to tackle the issue in their relationships. Interviewed in 1971 by the Notting Hill Women’s Liberation Workshop group for the September issue of Shrew, a 25-year-old sales representative—Ros—said that she had previously been reluctant to discuss sex with a man.71 ‘I was brought up to think that the worst thing you could do to a man was to criticize his sexual technique.’ Prompted by her involvement with the WLM, Ros had decided ‘to be far more honest about sex and not be afraid to talk about it’. For some, the price of preserving the feelings of their partners at the expense of their own was too great. Responding angrily to the suggestion by Jacqueline Brandwynne in Cosmopolitan that women should not be honest about their experience of sex, Jill Marshall of Sussex asked: ‘How are women ever to discover and enjoy their own sexuality and achieve sexual equality when we are told to preserve the male ego whatever the cost to the relationship?’72 For those who were discussing sex with their partners, there was an increasing sense of male sexual insecurity and the ‘coaching’ by women was not always well received. In a 1981 issue of Spare Rib, Angela Hamblin quoted a woman for whom the discussion had been drawn out: ‘I know at first I said the same thing over and over again to B and he just didn’t hear me and then eventually he did … he started to hear … but even now there are things he doesn’t hear but I know in time he will … .’73 Two years later, another woman independently told Hamblin: ‘At first, he reacted by not understanding what I was saying; by becoming celibate for months on end; by becoming self-punitive and self-destructive; by being constantly perplexed and “trying-to-do-it-better”.’74 One of the common themes that emerged from Hamblin’s survey was the degree of effort and persistence that was needed, often running to many years, to make change in intimate relations. 
For many women, however, there was a prior question of what autonomy in sexual relations looked like. To communicate effectively in the first place, you needed to know what you wanted.
Redefining Sexual Pleasure
‘Orgasms were never going to be enough’, said Lynne Segal, ‘however autonomously we might control them’.75 Koedt’s pamphlet left many women with an ‘unease about such a mechanical approach to sex’ or troubled by the focus on orgasm.76 Very early on, Germaine Greer had expressed concern about ‘the substitution of the clitoral spasm for genuine gratification’ and a focus on the genitals—to be stimulated mechanically—rather than a focus on sexuality and the person as a whole.77 Similarly, many were keen to emphasize a broader picture of sexuality that was less focussed not only on penetration but also on orgasm.78 Reflecting on a conversation with friends in 1981, Angela Hamblin recalled:
We wanted to free ourselves from the limitations which patriarchal definitions had placed upon our sexuality. We wanted to discover more about our own bodies, responses and needs and to explore more open, less goal-oriented, forms of sexual pleasure.79
Reporting on the results of a survey of Spare Rib readers in 1983, Hamblin reported that, ‘in total contrast to the dull repetitiveness of the heterosexual ritual’, respondents focussed on ‘sensuous contact’ and ‘exploration’, as well as emphasizing a slowing down of sex.80 One respondent favoured ‘[s]pending a long time making love—slowly and sensuously without a mad dash for orgasm’. Others were more specific about the emotional content. One respondent mused: ‘What do I want? Affection, generosity, respect, excitement, exploration (of bodies, of emotions), the chance to develop, the chance to direct and control what’s happening.’ Lynne Segal wanted to explore the eroticism of power suggesting that ‘it is not necessarily orgasms that we are deprived of, but more likely any possible sexual scenarios for exploring and enjoying the contradictory tensions of erotic desire—dependence and strength, control and passivity, love and hate—in any playful, yet intense and pleasurable way’.81 For many women, however, power dynamics proved problematic rather than playful as they butted up against political theory. Some women felt uncomfortable if they responded—even in fantasies—to subordination. In her recent autobiography, Sheila Rowbotham shared her experience of this struggle: ‘David and I tried changing stereotypical male and female roles. I kept tabs on the pleasure I could feel in passivity and abandonment, wondering whether these indicated a furtive desire for coital subordination. But marking erotic responses did not eradicate them. They just sat tight in some remote corner of my being.’82 Rowbotham wasn’t alone. 
In an article written for The Observer in 1984, Minette Marrin reported that large numbers of women were similarly grappling with this difficulty of reconciling arousal through submissiveness with feminist emancipation.83 For Lynne Segal, this contradiction was a reflection of seeing sex as necessarily binary with an active, dominant participant and a passive, subordinate participant.84 She suggested that women should acknowledge and embrace the fact that sex involves vulnerability on the part of all participants, including men. While acknowledging that ‘no feminist can ignore the symbolism of “the sex act”, nor many men’s psychic compulsion, combined with their physical and/or social power, to coerce women into it’, Segal exhorted women to look beyond ‘conventional narratives of sexuality and gender difference’.85 For Segal, autonomy was the right to choose: ‘Every time women enjoy sex with men, confident in the knowledge that this, just this, is what we want, and how we want it, I would suggest, we are already confounding the cultural and political meanings given to heterosexuality in dominant sexual discourses. There “sex” is something “done” by active men to passive women, not something women do.’86 But to some women, there was only one possible power dynamic in a heterosexual relationship—the subordination of woman to a male oppressor. For that reason, they suggested, the only option for autonomy was for women to eschew relationships with men.
At the WLM Conference in London in 1977, Sheila Jeffreys held a workshop that established the Revolutionary Feminism network.87 Revolutionary Feminism posited that ‘men are the enemy’ who ‘colonise’ women through penetration.88 They charged that ‘every woman who engages in penetration bolsters the oppressor and reinforces the class power of men’.89 But avoiding penetration did not absolve a woman engaging in sex with a man from her culpability. According to the Leeds Revolutionary Feminists (LRF), all sex with a male was a form of acquiescing to, and sanctioning, male power. ‘There is no such thing as “pure” sexual pleasure’, stated the LRF.90 This was a stance that, in the words of Beatrix Campbell, ‘equated autonomy with separatism’.91 If sex with men was inherently problematic, then perhaps men should be taken out of the equation altogether.
The Coming out of the Political Lesbian
Political lesbianism, to use the American term, was the logical evolution of Revolutionary Feminism.92 It was a conscious rejection of oppression through the sexual act. Being lesbian was simply ‘better’ and was something that could be, and should be, chosen. The LRF elaborated: ‘The advantages include the pleasure of knowing that you are not directly servicing men, living without the strain of glaring contradiction in your personal life, uniting the personal and the political, loving and putting your energies into those you are fighting alongside rather than those you are fighting against and the possibility of greater trust, honesty and directness in your communication with women.’93 But what of desire? Elizabeth Wilson located political lesbianism in behaviourism that posited that ‘not only could you learn to have orgasms, you could also learn to respond sexually to women’.94 This, argued feminist Frankie Rickford, undermined the work that had been undertaken to revolutionize the sexuality and sexual lives of women. In a letter to WIRES, Rickford wrote: ‘The tragedy is that women’s tentative attempts to explore and reveal and challenge standard sexual practice have been killed stone dead by the two commandments that if you do it with women you’re OK and if you do it with men, you’re out.’95 For the LRF, however, the level of sexual desire for other women was less important than the rejection of sex with men. ‘Our definition of a political lesbian’, they claimed, ‘is a woman-identified woman who does not fuck men’.96 In other words, you could be a political lesbian and not have sex with another woman.
Many feminists were to identify as ‘political lesbians’ but the degree to which their sexual lives involved an authentic desire for women and the extent to which they engaged in sexual practices with other women is opaque. However, it is clear that, political or otherwise, increasing numbers of women were choosing to identify as lesbian and share their lives with other women. Indeed, from the late 1960s, there was an increasing visibility of lesbians both in the WLM and across the general population. In 1978, the British edition of Our Bodies, Ourselves attributed a ‘freer expression of lesbianism’ to the WLM.97 For others, the WLM had a more direct impact in the transformation of their emotional and sexual lives.98 In an article by Sue Cartledge and Susan Hemmings in Spare Rib examining the reasons why women became lesbian, one woman recalled that she had noticed an attraction to women when attending the women’s liberation conferences. Another woman they had spoken to about becoming lesbian explained: ‘I suppose my life had become more and more involved with women, and they seemed far more interesting than my husband, or any men, more exciting, more glamorous.’99 Cartledge and Hemmings added that, glamour aside, many women had developed strong friendships with other women in the WLM through spending a lot of time with them and that sexual relationships with other women represented ‘an extension of the closeness that grows up between women working together politically, and depending on each other, as well as enjoying more relaxed times’.100
Some women found themselves exploring sex with men and with women. Deborah Gregory, who had researched women’s bisexuality at the Third London Regional Women’s Liberation Conference in 1980, concluded that there were a significant number of feminists who identified, at least privately, as bisexual. ‘Many feminists’, she reported, ‘view their bisexuality not as a transition period to either lesbianism or heterosexuality, but see themselves as having made the transition to a positive acceptance of their bisexuality’.101 Speaking of her own sexuality, Gregory said that she ‘would only feel close to a man who could accept that sex was about me expressing my sexuality as much as about him expressing his, and that my sexuality did not consist in my responsiveness to him. Had I not found any man equal to that challenge’, she continued, ‘I would today identify as a lesbian’.102 Despite this commitment to ensuring the maintenance of self in her sexual relationships, Gregory acknowledged that ‘issues of personal autonomy and freedom within sexual relationships … are difficult and delicate balancing acts for all women’ and for some women, including Gregory, celibacy could provide a respite to this challenge.103
The Right to be Celibate
For members of US radical feminist collective Cell 16, celibacy was one of three essential practices by which women could liberate themselves, along with separatism and karate. Writing in 1968, Cell 16 member Dana Densmore argued that women invariably compromised themselves in sex, noting: ‘[o]ne hangup to liberation is a supposed “need” for sex’.104 However, the position of Cell 16 was not commonly shared across feminist groups in the 1970s.105 In 1978, women were cautioned, in Our Bodies, Ourselves, that there were ‘some very real drawbacks to long periods of celibacy’, which included a lack of physical affection analogous to ‘a kind of starvation’.106 This attitude mirrored broader social attitudes to celibacy in which ‘celibacy was inextricably associated with repressed spinsters, and had been relegated to a shadowy past of outmoded morality’.107 However, by the late 1970s and into the 1980s, interest in celibacy as a choice was growing.
While references to celibacy in Spare Rib were sparse throughout the 1970s, notices of meetings, workshops, and even conferences started to appear from late 1979. Some explained the growing interest of women in celibacy as a ‘backlash against permissive culture’.108 Indeed, many women explained a desire to resist pressures around sex as the main reason for choosing to be celibate. In The New Celibacy: Why More Men and Women are Abstaining from Sex—and Enjoying It, the American psychologist Gabrielle Brown described the ‘new celibacy’ as a reaction to a social and cultural overemphasis on sex.109 Encouraging Spare Rib readers to read The New Celibacy, Elisabeth Hill from Ilford mirrored the sentiment emphasizing that women should have the ‘right to choose’ not to have sex without feeling inadequate or guilty.110 In her recent history of singleness, Emily Priscott suggests that ‘[i]n the new social climate, celibacy seemed like a backwards step, a renouncement of the bodily autonomy that Greer had fought for’, but for many women celibacy represented an option that was just part of their self-determined sexual lives.111 Sally Cline concluded from the interviews she conducted with women in Britain and North America: ‘All the reasons women give for celibacy are in some way related to a central notion of autonomy.’112 Furthermore, rather than celibacy being an act outside of sexuality, Cline argued that ‘the decision not to engage in sexual acts is undeniably a sexual statement’.113 Making this point, in 1980, the Celibacy Group renamed itself the Autonomy Group.114 For some, celibacy was not necessarily sexless. This was the distinction between celibacy—a choice—and asexuality—considered to be a more natural state of being without sexual urges. Celibacy, as discussed throughout the era, predominantly referred to an absence of sex with other people. It did not preclude masturbation. 
Even Cell 16 reluctantly suggested that women could relieve themselves through masturbation if necessary.
Many women saw celibacy as a legitimate part of, and consistent with, attachment. Sally Cline argued that ‘[t]he nature of the autonomy that celibate women are grasping for is significantly different from the traditional male view. It is a need for intimacy and independence which I have termed connected autonomy’.115 Many found value in celibacy as a means to deepen emotional connection with others and practiced an intentional celibacy, distinct from abstinence practiced for birth control, within their relationships.116 For others, celibacy was a way of recalibrating their sex lives. Some women found it difficult to understand their sexual needs and/or assert new patterns of sexual behaviour while sexually active. ‘Periods of celibacy’ were seen as a way for women to reset their relationship with sex. Writing for Spare Rib in 1981, Angela Hamblin reflected that it could be difficult for women to challenge the long-standing association of sex with penetration or to challenge the belief that all intimacy had to result in orgasm while in a sexually active relationship. Hamblin noted: ‘These periods of celibacy not only provided us with a breathing space within which we could begin to dismantle some of the long-established destructive patterns, but also gave us the opportunity to discover more about ourselves and our own needs’.117 For self-identified bisexual Deborah Gregory, celibacy provided an important respite to ‘get back to the core of who “we” are’, a process necessary to feeling connected to others without losing herself.118
Conclusion
The WLM disrupted a discourse about female sexuality that had persistently reflected that sexuality as both dependent on, and in binary opposition to, male sexuality. In this, it was unique. Writing in 1983, Lucy Bland situated the WLM in relation to the interests and concerns of feminists earlier in the twentieth century concluding that, while the WLM mirrored the concerns of the social purity feminists in demanding the right to say ‘no’ to sex, it also radically extended the reach of feminist concerns.119 It made a ‘claim [to] the right to be sexual’ in a way that was, amongst other things, ‘for the first time decentring heterosexuality from the dominant definition of sex, arguing for lesbianism as a political and not simply a personal choice, and for celibacy as an option that doesn’t necessarily spell no-sex, but sexual self-pleasure’.120 This was not just discussed in theoretical terms. Women thought about how the theory could, or should, apply to their personal lives. ‘The Women’s Movement changed me sexually’, declared 26-year-old Diane. ‘It’s made me aware of lesbian relationships, I know now I can enjoy women just as much as men. I know too if I want to have casual sex that I can do just that, you know bring a guy back to my place, I’m calling the shots too. When you become very aware of yourself you become confident, you realize you can have a relationship with somebody on your terms and that’s very important for women.’121
The significance of the WLM lay in both establishing a political framework that drew attention to women as an oppressed group (across all areas of their lives) and creating a social network to facilitate and support the work of women to challenge oppression. The WLM was fuelled by the local, regional, national and, even, transnational connections forged by women with each other. Through this network, thousands of women entered a shared discourse and formed relationships that would permanently alter their social landscape. Sarah Stoller draws attention to the ‘deep and enduring friendships forged through the WLM’ that were as new as they were comparatively intense.122 Like the adolescent girls described by Hannah Charnock, who looked to each other to test their understanding of sex and sexual currency, women in the WLM looked to each other to re-examine their sexuality, including the cultural construction of female sexuality.123 The network also provided women with considerable support and encouragement for experimentation with new ways of being sexual. At the same time, however, it was an environment in which women were vulnerable to scrutiny and assessment against feminist theories. While this encouraged women to re-evaluate the way they lived their lives, it also led to fissures across the movement as some women started to feel that other women were attempting to dictate the terms of their sexuality which, they argued, obviated the project of a self-determined sexuality.124 However, such differences and disputes should not be interpreted as diminishing the work undertaken by women to analyse the issue of autonomy in their sexual lives. 
Reflecting on similar work undertaken by the American WLM, Jane Gerhard explains that feminists ‘generated new accounts of American sexual thought … not through an orchestrated and coherent critique but through a range of writings from different and, at times, antithetical points of view’.125 In the same way, while much of the work undertaken by British feminists to disrupt British sexual norms appeared disjointed and, at times, antithetical, that work was also strongly connected by a concern with sexual autonomy. With thousands of women involved, each with different ideas about a fulfilled sexual life, there were bound to be differences of opinion, but the conversation did not end with the fallout from the 1978 WLM Conference. Many of the references in this article date from the late 1970s and 1980s, some even from the 1990s. Importantly, the conversation extended not only through time (to inform future campaigns on issues such as pornography and #metoo) but also in reach to women who did not necessarily identify with the WLM. This was not a niche movement. Thousands of women in the WLM were not only talking to each other but were taking the conversation into the mainstream. Adelaide Bry wrote in Cosmopolitan in 1975 of a ‘sexually aggressive woman’ who ‘is much less a type than she is a healthy strong personality who has taken it upon herself to define her own sexuality rather than allowing some man, or men, or the society at large to define it for her’.126 Almost ten years later, in 1984, feminist Eileen Fairweather discussed the merits of celibacy with Cosmopolitan readers while, in the same year, British psychologist Dr Anne Dickson—author of A Woman in Your Own Right (1982)—wrote about assertiveness in sexual relationships and facilitated ‘Cosmo’s Sexuality Seminar’.127 Even the readers of the more conservative Woman magazine were not immune, with Virginia Ironside advising a reader to read The Female Eunuch.128
The sources analysed in this article do not permit an empirical assessment of the ways in which the WLM changed the sexual behaviours of women. They do make it clear, however, that the WLM at the very least changed the way in which large numbers of women thought and talked about their sexuality. A woman might still rely on her partner to take responsibility for her sexual pleasure. A woman might still be concerned to prioritize the desires and preferences of her partner. For many, however, these were now more self-conscious choices and perhaps the decades-long conversation and cumulative small actions ultimately created a little more space for women more generally—and not just feminists—to exercise a greater degree of sexual agency in their own interest.
Footnotes
1. In an account by Lindsay Mackie, ‘Guardian miscellany’, The Guardian (29 November 1973), 11.
2
For writing on women’s engagement with autonomy in the post-war period see, for instance, Lynn Abrams, ‘The self and self-help: Women pursuing autonomy in post-war Britain’, Transactions of the RHS, 29 (2019), 201–21; Emily Robinson et al., ‘Telling stories about post-war Britain: Popular individualism and the “crisis” of the 1970s’, Twentieth Century British History, 28 (2017), 268–304; Jon Lawrence, Me Me Me? The Search for Community in Post-War England (Oxford, 2019); and Florence Sutcliffe-Braithwaite and Natalie Thomlinson, ‘Vernacular discourses of gender equality in the post-war British working class’, Past and Present, 254 (2022), 277–313.
3
Sutcliffe-Braithwaite and Thomlinson, ‘Vernacular Discourses of Gender Equality in the Post-war British Working Class’, at 286.
Callum Brown, ‘Sex, Religion, and the Single Woman c. 1950-75: The Importance of a “short” Sexual Revolution to the English Religious Crisis of the Sixties’, Twentieth Century British History, 22 (2011), 189–215.
Marcus Collins, Modern Love: An Intimate History of Men and Women in Twentieth-Century Britain (London, 2003); Ben Mechen, ‘“Instamatic living rooms of sin”: Pornography, Participation and the Erotics of Ordinariness in the 1970s’, Contemporary British History, 36 (2022), 174–206.
Dominic Sandbrook, State of Emergency: The Way We Were: Britain 1970 – 1974 (London, 2011), 432. See also Collins, Modern Love, 176.
11
Brown, ‘Sex, Religion, and the Single Woman’, 194.
12
Journalist David Bouchier, who provided estimates of participation in 1983, is most commonly cited. At that time, while suggesting that there were, at most, 20,000 participants in the movement, Bouchier acknowledged that far more than this number had been seen at certain events, such as 60,000 women participating in demonstrations on abortion. Given that official circulation of just one of the feminist publications—Spare Rib—has generally been reported to be 25,000—with copies often shared beyond this number—Bouchier’s estimate for women engaged in feminist thought throughout the 1970s and 1980s seems very low. David Bouchier, The Feminist Challenge: the Movement for Women’s Liberation in Britain and the USA (London, 1983), 178. For a discussion of the circulation of Spare Rib, see Lucy Delap and Zoe Strimpel, ‘Spare Rib and the Print Culture of Women’s Liberation’, in Laurel Forster and Joanne Hollows, eds, Women’s Periodicals and Print Culture in Britain, 1940s-2000s : The Postwar and Contemporary Period (Edinburgh, 2020), 46–66.
13
Despite a noted lack of engagement by the WLM with the press (see Kaitlynn Mendes, ‘Reporting the Women’s Movement: News Coverage of Second-wave Feminism in UK and US Newspapers, 1968-1982’, Feminist Media Studies, 11 (2011), 483–498, at 488), many feminists were writing for mainstream publications including Cosmopolitan, Nova and The Guardian throughout the period.
14
For an account of the diversity of the movement, see Sue Bruley, ‘“It didn’t just come out of nowhere did it?”: The Origins of the Women’s Liberation Movement in 1960s Britain’, Oral History, 45 (2017), 67–78.
15
The Tufnell Park group described a shared interest in an ‘analysis of the causes of women’s oppression and of the means to change it’, Shrew, 6 (October 1969), at 1.
16
Beatrix Campbell, ‘A Feminist Sexual Politics: Now you See it, now you don’t’, Feminist Review, 5 (1980), 1–18, at 1.
17
Hannah Charnock, ‘Teenage Girls, Female Friendship and the Making of the Sexual Revolution in England 1950-1980’, The Historical Journal, 63 (2020), 1032–53, at 1036.
18
Such as Stella Browne and Dora Marsden. See Lucy Bland, ‘Heterosexuality, Feminism and The Freewoman Journal in early Twentieth-century England’, Women’s History Review, 4 (1995), 5–23.
19
‘It is easy to forget how rare and genuinely shocking it was for British women to write/talk explicitly about sexuality in the 1970s.’ Hera Cook, ‘Angela Carter’s “The Sadeian Woman” and Female Desire in England 1960-1975’, Women’s History Review, 23 (2014), 938–956, at 939.
The term ‘the personal is political’—which was adopted by feminists internationally—was the title of an essay by Carol Hanisch in Notes from the Second Year: Women’s Liberation: Major Writings of the Radical Feminists (1970), at 76.
30
Kate Millett, Sexual Politics (Great Britain, 1971).
31
Barbara Hardy, ‘Consciousness Raising’, New York Times Book Review (6 September 1970, reprinted 6 October 1996), 96.
32
Millett, Sexual Politics, 118.
33
Germaine Greer, The Female Eunuch (London, 2012, first published London 1970), 17.
See, e.g. Marie Stopes, Married Love, ed. Ross McKibbin (Oxford, 2004); Theodoor H. Van de Velde, M.D., Ideal Marriage: Its Physiology and Technique (London, 1926); and Helena Wright, The Sex Factor in Marriage: A Book for Those Who Are or Are About to be Married (London, 1930).
39
Stopes, Married Love, 37 (emphasis in original).
40
‘[Stopes’] main contribution was the construction of an autonomous female sexuality, which existed independently of male sexuality, and provided a realistic basis for a more equal interaction with men in the context of her own time.’ Cook, The Long Sexual Revolution, at 192.
41
Stopes, Married Love, 63.
42
Margaret Jackson, The Real Facts of Life: Feminism and the Politics of Sexuality c1850-1940 (London, 1994), 141 (emphasis in original).
Sigmund Freud, Three Essays on the Theory of Sexuality, trans. and revised by James Strachey (New York, 1975), 87.
56
Anne Koedt, The Myth of the Vaginal Orgasm, paper presented at the Women’s Liberation Conference in Chicago during Thanksgiving, 1968.
57
Kate Fisher has observed that many women ‘denied knowledge of the marriage advice literature, revealed only partial awareness of its existence, or even demonstrated hostility to its message’. Fisher, ‘Lay Back, Enjoy it and Shout Happy England’, 184.
58
Notes from the Second Year: Women’s Liberation: Major Writings of the Radical Feminists (New York, 1970), 37–47. In Britain, one of the distribution points was the London Women’s Liberation Workshop (see, e.g. Shrew (February/March 1970), at 2).
Sue Bruley, ‘Consciousness-raising in Clapham: Women’s Liberation as “lived experience” in South London in the 1970s’, Women’s History Review, 22 (2013), 717–38, at 729.
62
Boston Women’s Health Book Collective, Our Bodies, Ourselves: A Book By and For Women (New York, 1971).
63
Boston Women’s Health Book Collective, Our Bodies, Ourselves: A Health Book By and For Women, eds, Angela Phillips and Jill Rakusen (1971, British edition London, 1978).
64
Phillips and Rakusen, Our Bodies, Ourselves, at 136.
65
Anne Hooper, The Body Electric (London, 1980).
66
Eleanor Stephens had been a member of the Boston Women’s Health Collective before moving to London and joining the Spare Rib collective. She was a regular contributor to Spare Rib with a particular focus on sex and sexual politics.
Segal, Straight Sex, 266. More recently, Lucy Delap has explored the responses of anti-sexist men to feminist perspectives of sex including the connections drawn by women between sex and violence in the late 1970s and 1980s in particular, with responses ranging from inventive forms of experimentation in an effort to move away from penetration to degrees of sexual paralysis experienced by some men. For other men, it led to exploring sex with other men. Lucy Delap, ‘Rethinking Rapes: Men’s Sex Lives and Feminist Critiques’, Contemporary British History, 36 (2022), 253–76.
The LRF discussed the provenance of the expression in Love your Enemy?, 67.
93
Love your Enemy?, 9.
94
Elizabeth Wilson, ‘I’ll Climb the Stairway to Heaven: Lesbianism in the Seventies’, in Cartledge and Ryan, Sex & Love, 180–95, at 186.
95
Love your Enemy?, 11.
96
Love your Enemy?, 5.
97
Phillips and Rakusen, Our Bodies, Ourselves, 87.
98
Historian Margaretta Jolly suggests that ‘a minimum of 250,000 queer women over seventy today owe their sense of sexuality in part to the WLM’, See Margaretta Jolly, ‘After the Protest’, in Kristina Schulz, ed., The Women’s Liberation Movement: Impacts and Outcomes (New York and Oxford, 2017), 298–319, at 311. See also Sue Bruley, ‘Women’s Liberation at the Grass Roots: A View from some English Towns, c.1968–1990’, Women’s History Review, 25 (2016), 723–40.
99
Cartledge and Hemmings, ‘How Did we Get this Way?’, Spare Rib, 86 (September 1979), from 43, at 46.
100
Cartledge and Hemmings, ‘How Did we Get this Way?’.
101
Deborah Gregory, ‘From Where I Stand: A Case for Political Bisexuality’ (dated May 1981), Papers of Amanda Sebestyen. The Women’s Library, LSE Folder 7SEB/A/16 (Unpublished Papers, articles, discussions on sexuality), 4. An abridged version of this paper—titled ‘From where I stand: A case for feminist bisexuality’—was later published in Cartledge and Ryan, Sex & Love, 141–56.
102
Gregory, ‘From Where I Stand’, 12.
103
Gregory, ‘From Where I Stand’, 15.
104
Dana Densmore, ‘On Celibacy’, No more Fun and Games: A Journal of Female Liberation, 1 (1968), no page numbers.
Brown, The New Celibacy, 110. See also Liz Hodgkinson, who claimed that celibacy had improved both her own and her husband’s wellbeing, in Lorna Hogg, ‘Today Lifestyles: “Sex is not compulsory …”’, Evening Herald (Dublin) (Monday 18 May 1987), 11.
Sarah Stoller, ‘Forging a Politics of Care: Theorizing Household Work in the British Women’s Liberation Movement’, History Workshop Journal, 85 (2018), 95–119, at 111.
123
Charnock, ‘Teenage Girls, Female Friendship and the Making of the Sexual Revolution in England 1950-1980’.
124
‘Anger and resentment against the new radical feminist denunciation of heterosexuality was expressed by many straight feminists, particularly by socialist feminists. We began to avoid each other, and open feminist gatherings became increasingly stormy and unpleasant places to be.’ Lynne Segal, Straight Sex, 58. Angela Hamblin pointed out that, for many women, ‘transforming the basis upon which we are prepared to share our sexuality with men’ represented a largely ‘private struggle which, despite the support which many individual women have given each other, has not been validated by the women’s liberation movement as a whole’, in ‘Is a feminist heterosexuality possible?’, at 105. See also Athina Tsoulis, ‘Heterosexuality – A feminist option?’, Spare Rib, 179 (1987), from 22 and Beatrix Campbell, ‘A Feminist Sexual Politics’.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact [email protected]

The active engagement by women with this question of sexual selfhood belies a historiography of sexual revolution—real or imagined—in which women were the passive beneficiaries (or victims) of technological, cultural, religious, social and/or economic shifts. Drawing on the writing of women in the feminist press, mainstream media, books, and pamphlets, this article describes the specific contribution of the WLM to shaping new possibilities for a sexuality defined, and controlled, by women. I argue that the WLM combined a powerful political framework with an influential social network to significantly contribute to a far-reaching process of deconstructing and recasting female sexuality and sexual relations.
‘Not long ago I made the intellectual decision to become bisexual’, 25-year-old London resident Linda told Guardian reporter Lindsay Mackie in 1973.1 Separated from her husband and in love with a man at the time of her interview, Linda continued: ‘I am in a bit of a state of flux with this new-found independence but I feel amazingly self-contained.’ For Linda, this independence was as much physical as anything else. ‘I come back home and I feel the outlines of myself’, she explained. ‘I’m not living through anyone else and almost every day I say to myself “I’m me and I’ll never be anything else and I’m satisfied with it.”’ Linda’s description of her choice as an ‘intellectual decision’ is telling. From the late 1960s, British women started a conversation about their bodies and sex as part of a broader, self-conscious exploration of autonomy.2
Linda had attended her first women’s group meeting in 1972. It was the start of an involvement with the Women’s Liberation Movement (WLM) that would alter her perspective and ultimately her life. ‘I see everything now in a political way’, she explained. | yes |
Editor’s Note: “The Seventies” focuses on the sexual revolution Thursday at 9 p.m. ET/PT on CNN. Sally Kohn is an activist, columnist and television commentator. Follow her on Twitter: @sallykohn. The opinions expressed in this commentary are solely those of the author.
Story highlights
It turned out to be more about personal empowerment than changes in law and politics, she says
CNN
—
To borrow a cliché, the 1970s was all about sex, drugs and rock and roll. But arguably, it was the sex part that had the most enduring and profound effect on American society.
The 1970s saw the convergence of several phenomena related to sex, sexuality and gender. There was the women’s liberation movement, in which women and girls who had been long told they were the inferior sex finally took to the streets, the courts and the voting booths to assert their equality.
In 1970, the first Women’s Liberation Conference took place in England — the same year that Germaine Greer published “The Female Eunuch” and Robin Morgan published “Sisterhood Is Powerful, An Anthology of Writings from the Women’s Liberation Movement.” The next year, the first women’s liberation march took place.
Pop culture and politics collided on December 21, 1970, when the King of Rock 'n' Roll, Elvis Presley, visited President Richard Nixon in the White House Oval Office. The '70s may have been many things, but boring sure wasn't one of them. Check out 70 of the most unforgettable moments of the decade. For more, watch the CNN Original Series "The Seventies."
National Archive/Newsmakers/Getty Images
Apollo 13 returns safely to Earth —
The Apollo 13 spacecraft was intended to be the third landing on the moon, but the NASA crew aborted its mission after an oxygen tank exploded on board. The astronauts landed in the South Pacific on April 17, 1970. Here, lunar module pilot Fred W. Haise Jr. is about to be hoisted up to a recovery helicopter from the USS Iwo Jima.
NASA/Hulton Archive/Getty Images
Kent State massacre —
Four students died and nine others were wounded on May 4, 1970, when members of the Ohio National Guard opened fire on students protesting the Vietnam War at Kent State University in Ohio. In this Pulitzer Prize-winning photo, taken by Kent State photojournalism student John Filo, Mary Ann Vecchio can be seen screaming as she kneels by the body of slain student Jeffrey Miller.
John Filo/AP
The Beatles call it quits —
The "Fab Four," pictured here in 1970, released their final album, "Let It Be," on May 8, 1970. The album came one month after Paul McCartney announced the group's breakup.
Hans J. Hoffmann/ullstein bild/Getty Images
'Flying Bobby' —
In one moment, Bobby Orr became a hockey legend. On May 10, 1970, Orr scored an overtime goal in Game 4 of the Stanley Cup Finals, giving the Boston Bruins their first championship since 1941. In 1971, Orr signed the first million-dollar contract in NHL history -- $200,000 a year for five years -- and in 1979 he became the youngest NHL Hall of Famer when he was inducted at the age of 31.
Ray Lussie/Boston Herald American/AP
Gay rights movement gains popularity —
Gay rights activists Foster Gunnison and Craig Rodwell lead a gay rights march in New York on June 28, 1970, then known as Gay Liberation Day. The march was held on the first anniversary of the police raid of the Stonewall Inn, a popular gay bar in New York's Greenwich Village. The raid led to demonstrations and protests by the gay community. The Stonewall riots helped bring together the gay community in New York, and by 1971 gay rights groups had formed in almost all of the major cities in America.
Fred W. McDarrah/Getty Images
'Hey! Ho! Let's go!' —
The '70s ushered in a new musical movement that put a premium on speed, simplicity and raw power. Bands like the Ramones, pictured, and the Sex Pistols put to waste the trippy, hippie music of the '60s, replacing it with short, fast songs filled with attitude and angst. It could only be called one thing: punk.
Michael Ochs Archives/Getty Images
Indira Gandhi re-elected —
Indira Gandhi, the only woman to ever hold the office of Prime Minister of India, won a second term in a landslide victory in March 1971. She would be re-elected to a fourth term in 1980, but she was assassinated by two of her bodyguards in 1984.
Fox Photos/Getty Images
Disney World opens —
A crowd in Orlando waits for Walt Disney World's Main Street to open in October 1971. The park cost an estimated $400 million to build and now attracts around 25 million visitors annually. When Disney World opened in 1971, the price for admission was $3.50. A single-day ticket now is $105 for anyone over 10 years old.
AP Photo
'Bloody Sunday' —
On January 30, 1972, British soldiers opened fire against protesters in Londonderry, Northern Ireland, who were marching against British rule. Thirteen people were killed on the scene, and more than a dozen were injured. After the shooting, recruitment and support for the Irish Republican Army skyrocketed. Three decades of violence known as The Troubles followed, and almost 3,000 people died.
Popperfoto/Getty Images
Nixon in China —
Richard Nixon became the first U.S. President to visit China. His trip in February 1972 was an important step in building a relationship between the two countries.
Fotosearch/Getty Images
'Napalm Girl' —
Associated Press photographer Nick Ut photographed terrified children running from the site of a napalm attack during the Vietnam War in June 1972. A South Vietnamese plane accidentally dropped napalm on its own troops and civilians. Nine-year-old Kim Phuc, center, ripped off her burning clothes while fleeing. The image communicated the horrors of the war and contributed to the growing anti-war sentiment in the United States. After taking the photograph, Ut took the children to a hospital.
Nick Ut/AP
'Hanoi Jane' —
In July 1972, in the midst of the Vietnam War, actress Jane Fonda visited the North Vietnamese city of Hanoi and criticized the U.S. role in the war, leading many to call her "anti-American." Earlier this year, Fonda called the trip an "incredible experience" but expressed some regret. "It hurts me, and it will to my grave, that I made a huge, huge mistake that made a lot of people think I was against the soldiers," Fonda said during an appearance in Frederick, Maryland.
AFP/Getty Images
Porn goes mainstream —
In any other year it might sound strange, but in 1972 one of the most popular films of the year was a porno. "Deep Throat" was one of the first pornographic films to receive mainstream attention, and it made $3 million in its first six months of release. It also took on an additional layer of cultural significance when the secret informant in the Watergate scandal went by the pseudonym "Deep Throat."
Bryanston Pictures
Cold War chess championship —
American Bobby Fischer, right, and Russian Boris Spassky play their last game of chess together in Reykjavik, Iceland, on August 31, 1972. Fischer defeated Spassky to become the World Chess Champion, ending a Soviet win streak that dated to 1948.
J.W. Green/Getty Images
Terror at the Olympics —
On September 5, 1972, the Summer Olympics in Munich, Germany, were in the throes of a hostage crisis. Two Israeli athletes had been killed and nine taken hostage by members of Black September, a Palestinian terrorist movement demanding the release of political prisoners by the Israeli government. Hours later, all nine hostages, five terrorists and one police officer were dead.
Kurt Strumpf/AP
The perfect season —
The Miami Dolphins, coached by Don Shula, win Super Bowl VII in January 1973 and become the only NFL team in history to win a championship with an undefeated record.
AP
'I'll make him an offer he can't refuse' —
"The Godfather," directed by Francis Ford Coppola, took home several Academy Awards in March 1973, including Best Picture and Best Adapted Screenplay. The film was based on the best-selling novel by Mario Puzo and starred, from left, James Caan, Marlon Brando, Al Pacino and John Cazale. Brando won the Oscar for Best Actor.
In 1973, the Sears Tower opened in Chicago, overtaking the World Trade Center as the tallest building in the world. The tower, now known as the Willis Tower, is the second-tallest building in the United States today.
Paul Slade/Paris Match/Getty Images
Bruce Lee dies —
Martial-arts actor Bruce Lee, seen here training in a scene from the film "Enter the Dragon," dies in July 1973 just days before the movie's release. He was 32. The film would cement Lee's legend and bring martial arts to the forefront of pop culture.
Warner Brothers/Getty Images
Scandal in the Nixon administration —
U.S. Vice President Spiro T. Agnew addresses the media on August 8, 1973, saying he would not resign while being investigated on charges of tax fraud, bribery and conspiracy. However, Agnew resigned in October 1973 after pleading no contest to a single count of income-tax evasion. He was the second vice president to resign in U.S. history.
AP
The 'Twin Towers' —
From the time of their completion in 1973 until their destruction in the terror attacks of September 11, 2001, The World Trade Center's twin towers stood as an iconic part of the New York City skyline.
Peter J. Eckel/AP
'Battle of the Sexes' —
In a nationally televised tennis match on September 20, 1973, Bobby Riggs, a former No. 1 tennis player, took on Billie Jean King, one of the top female tennis players at the time. Earlier in the year, Riggs put out a challenge to all female tennis players, saying no woman could beat him. King beat Riggs 6-4, 6-3, 6-3 and claimed a $100,000 prize.
TSN Archives/Getty Images
'The Exorcist' hits theaters —
"The Exorcist," based off the best-selling novel by William Peter Blatty about a demonically possessed 12-year-old girl, was released in December 1973. It went on to become one of the most popular films of all time. It was the first horror film to be nominated for a Best Picture Oscar, and Blatty won the Academy Award for Best Adapted Screenplay.
Universal History Archive/Getty Images
The pocket calculator —
By 1973, Clive Sinclair had introduced a series of pocket calculators that changed the industry, making calculators small and light enough to fit in your pocket. They were not only much smaller and thinner than their competitors, but also much cheaper, making their advanced technology available to the masses.
Getty Images
Energy crisis —
Cars in Brooklyn, New York, line up for gas in January 1974. In October 1973, an oil embargo imposed by members of OPEC led to skyrocketing gas prices and widespread fuel shortages.
Allan Tannenbaum/The LIFE Images Collection/Getty Images
Hammerin' Hank —
Hank Aaron breaks Babe Ruth's career home run record, hitting home run No. 715 at Atlanta's Fulton County Stadium in April 1974. Aaron finished his career with 755 home runs, a record that stood until Barry Bonds broke it in 2007.
Harry Harrris/AP
Baryshnikov defects —
Russian dancer Mikhail Baryshnikov, left, tapes a TV special in Canada, where he defected in June 1974. Soon after, Baryshnikov moved to the United States and started working with the New York City Ballet and the American Ballet Theatre. In 1979, he earned an Academy Award nomination for his supporting role in the film "The Turning Point."
Dina Makarova/AP
Nixon resigns —
U.S. President Richard Nixon gestures in the doorway of a helicopter on August 9, 1974, after leaving the White House following his resignation over the Watergate scandal. Nixon's resignation marked the end to one of the biggest political scandals in U.S. history, which began in 1972 after a break-in at the Democratic National Committee's headquarters at the Watergate complex. Five men were arrested for the burglary, and the FBI and Washington Post reporters Bob Woodward and Carl Bernstein were able to trace them back to Nixon and the White House.
Bill Pierce/LIFE Images Collection/Getty Images
Power of the press —
Reporters Bob Woodward, right, and Carl Bernstein sit in the newsroom of the Washington Post newspaper in May 1973. Woodward and Bernstein's reporting on the Watergate scandal led to President Nixon's resignation and won them a Pulitzer Prize. In 1976, Robert Redford and Dustin Hoffman would portray the pair in the film adaptation of their book "All the President's Men."
AP
Cover girl —
Beverly Johnson made history in August 1974 when she became the first African-American model to appear on the cover of Vogue magazine in the United States.
Gems/Redferns/Getty Images
Muhammad Ali watches heavyweight champion George Foreman fall to the canvas during their title bout in Kinshasa, Zaire, in October 1974. Ali's upset victory over the undefeated Foreman won him back the titles he was stripped of in 1967 for refusing induction into the U.S. Army.
AP
Cambodian genocide —
From 1975-1979, Pol Pot -- seen here at far left -- led the Khmer Rouge communist movement in Cambodia. During his reign, at least 1.7 million people -- nearly a quarter of Cambodia's population -- died from execution, disease, starvation and overwork, according to the Documentation Center of Cambodia.
Kyodo New/AP
The fall of Saigon —
In April 1975, the fall of Saigon to the North Vietnamese effectively marked the end of the Vietnam War. Here, U.S. Marines guard civilians during evacuations at Tan Son Nhut airbase. The country became the Socialist Republic of Vietnam on July 2, 1976.
Dirck Halstead/Liaison/Getty Images
Birth of the blockbuster —
In the summer of 1975, Steven Spielberg had people flocking to the theaters instead of the beaches. The success of "Jaws" -- his first hit movie -- set up summer as the season for Hollywood's biggest and highest-grossing movies.
Universal Pictures/Getty Images
Arthur Ashe wins Wimbledon —
American tennis player Arthur Ashe became the first black man to win Wimbledon when he defeated Jimmy Connors in July 1975. Ashe retired from tennis in 1980 and became a spokesperson for HIV and AIDS after announcing he had contracted HIV from a blood transfusion. Ashe died on February 6, 1993, from AIDS-related pneumonia.
Ed Lacey/Popperfoto/Getty Images
Soyuz commander Alexei Leonov, left, and Apollo commander Thomas Stafford shake hands in space, somewhere over West Germany, on July 17, 1975, after the Apollo-Soyuz docking maneuvers.
NASA
'Not ready for Prime Time' —
Saturday night television changed forever on October 11, 1975, when the sketch comedy show "Saturday Night Live" made its debut. Comedian George Carlin was the first host, joining a cast of young and upcoming comics known as "The Not Ready for Prime Time Players." "SNL" is now in its 40th year as one of the longest running shows in television history.
Warner Bros./Getty Images
Dazzling Elton —
English singer Elton John, one of the biggest artists of the '70s, performed two sold-out shows at Los Angeles' Dodger Stadium in October 1975, performing for more than three hours each night. John, known for his flamboyant outfits and oversized sunglasses, was decked out for the occasion in a sequined Dodgers baseball uniform.
Terry O'Neill/Getty Images
The Concorde takes off —
It broke the sound barrier and cut flight times in half. On January 21, 1976, the first commercial Concorde flights took place, from London to Bahrain and from Paris to Rio de Janeiro, cruising at speeds of 1,350 mph. The Concorde's service would be short-lived, however, as fewer than 20 aircraft ever saw commercial use. The last commercial Concorde flight took place on October 24, 2003.
Keystone-France/Gamma-Keystone/Getty Images
Happy 200th birthday, America! —
Fireworks at the Statue of Liberty light up the New York Harbor on July 4, 1976, as the country celebrates the bicentennial anniversary of the Declaration of Independence. Patriotic events took place around the country that year.
AP
'Angels' flying high —
On September 22, 1976, a blonde bombshell dropped into America's homes with the debut of the television show "Charlie's Angels." Farrah Fawcett and co-stars Kate Jackson and Jaclyn Smith became an instant hit with audiences. To this day, the show remains a lasting image of the '70s despite mixed reviews from critics.
Getty Images
Barbara becomes the news —
In October 1976, Barbara Walters, seen at left with actress Barbra Streisand, became the first woman to co-anchor a major network evening newscast. ABC made history before she even went on air, signing Walters to a $1 million annual contract to make her the highest-paid journalist at that time. She only co-anchored the show for a year and a half, but she would go on to host ABC shows such as "20/20," "The View" and "Barbara Walters Specials" until her retirement in 2014.
ABC Photo Archives/Getty Images
Disco ruled the charts in the late '70s but found some unlikely superstars in the form of the Village People. Their name was inspired by New York's Greenwich Village, which had a large gay population at the time, and the group became known for their onstage costumes and suggestive lyrics. In 1978, their songs "Macho Man" and "Y.M.C.A." became massive hits and brought them mainstream success.
Jazz Archiv Hamburg/ullstein bild/Getty Images
From peanut farmer to President —
Jimmy Carter embraces his wife, Rosalynn, in November 1976 after he was elected as the 39th President of the United States. Carter, a Democrat and former governor of Georgia, defeated incumbent Gerald Ford. During his time in office, Carter created the Department of Energy and Department of Education. Since leaving office in 1981, he has remained active in fighting for human rights and ending disease around the world with his nonprofit organization, the Carter Center.
Hulton Archive/Getty Images
'Roots' premieres —
Cicely Tyson, left, and Maya Angelou star in the television miniseries "Roots." The series premiered in January 1977, airing for eight consecutive nights and attracting a record number of viewers. Based off Alex Haley's novel, "Roots" told the story of an African boy sold into slavery in America and the following generations of his family. The show was viewed by more than half of the U.S. population in 1977, and it received 37 Emmy nominations.
ABC Photo Archives/ABC/Getty Images
'In a galaxy far, far away' —
May 25, 1977, was a historic day for sci-fi fans and moviegoers everywhere. George Lucas' "Star Wars" opened in theaters, introducing the world to characters such as Luke Skywalker, Chewbacca, R2D2 and, of course, Darth Vader. The "Star Wars" franchise is still one of most lucrative and popular film series around today.
Universal History Archive/UIG/Getty Images
Son of Sam —
Serial killer David Berkowitz, known as the Son of Sam, was arrested on August 10, 1977, after a series of shootings and murders that police believe began in the summer of 1976. Berkowitz was convicted of killing six people and wounding seven during his crime spree, which garnered large amounts of press coverage. He was known for targeting young women and sending cryptic, antagonizing letters to the New York police.
Hulton Archive/Getty Images
Apple plants the seed for the digital revolution —
In 1977, Apple Computer introduced the Apple II, which became one of the first successful home computers. Co-founders Steve Jobs, pictured here, and Steve Wozniak formed the Apple Computer Company in 1976. Along with Bill Gates' Microsoft, which was founded in 1975, Apple helped ignite the digital age we live in today.
Ralph Morse/The LIFE Images Collection/Getty Images
New York City goes dark —
In the middle of the summer of 1977, New York City experienced a power outage that caused much of the city to go dark. The blackout lasted two days, from July 13-14. As the city was in the midst of a financial crisis and the terror of the Son of Sam loomed over residents, many took to the streets and began looting. Police reported that looting in some areas of the city continued well into the daylight hours, and thousands of people were arrested.
AP
The King is dead —
Elvis Presley, the King of Rock 'n' Roll, died August 16, 1977, at the age of 42. He was still touring and recording throughout the 1970s, but his unexpected death sealed his legacy as one of the greatest cultural icons of the 20th century.
Ronald C. Modra/Getty Images
Game on —
The Atari 2600 was released in September 1977, bringing the world of video games into households everywhere. Packaged with two joystick controllers and one cartridge game, the Atari 2600 sold 250,000 units in 1977. By 1979, 1 million units were sold. What some believed at the time to be a fad has now turned into a billion-dollar-a-year industry.
Corbis
Mr. October —
Reggie Jackson of the New York Yankees hits his third home run of the game on October 18, 1977, leading the Yankees to a World Series win over the Los Angeles Dodgers. Jackson had a .357 batting average over the 27 World Series games throughout his career, earning him the nickname "Mr. October." Jackson and the Yankees would repeat as World Series champions the following year.
Louis Requena/MLB Photos/Getty Images
Disco fever —
Disco music sweeps the nation with the 1977 film "Saturday Night Fever" starring John Travolta. Catapulted by a soundtrack containing five No. 1 singles -- including "Stayin' Alive" and "Night Fever" -- the film became a huge commercial success. The soundtrack stayed on top of the album charts for six months, and Travolta earned an Academy Award nomination for Best Actor.
Paramount Pictures
A test tube produces life —
Louise Brown became the world's first test-tube baby on July 25, 1978. Dr. Robert Edwards, left, and Patrick Steptoe, right, pioneered the process of in vitro fertilization, in which a mature egg is fertilized by sperm outside the body and the resulting embryo is transferred into the woman's uterus. In 2010, Edwards won the Nobel Prize in Medicine for the development of in vitro fertilization, which has helped families conceive more than 5 million babies around the world.
Keystone/Getty Images
Peace in the Middle East —
Egyptian President Anwar Sadat, left, joins hands with Israeli Prime Minister Menachem Begin, right, on September 18, 1978, after the Camp David Accords were signed in Maryland. After 12 days of secret meetings, the two sides agreed upon a step toward peace. U.S. President Jimmy Carter, center, personally led the lengthy negotiations and discussions between the two parties.
David Hume Kennerly/Getty Images
The world welcomes a new Pope —
His name was Karol Jozef Wojtyla, but the world knew him as Pope John Paul II. Born in Poland, John Paul II was the first non-Italian Pope in more than 400 years when he became Pope in 1978. He made his first public appearance on October 16, 1978, at St. Peter's Square in the Vatican, and before his death in 2005 he was beloved for his commitment to human rights around the world.
Massimo Sambucetti/AP
The Jonestown massacre —
Bodies lie around the compound of the People's Temple in Jonestown, Guyana, on November 18, 1978. More than 900 members of the cult, led by the Rev. Jim Jones, died from cyanide poisoning; it was the largest mass suicide in modern history.
David Hume Kennerly/Getty Images
Assassination of Harvey Milk —
In 1977, Harvey Milk was elected to the San Francisco Board of Supervisors, making him the first openly gay man elected to public office in California. Milk started his political ambitions in San Francisco in the early '70s, but he did not hold an office until he was appointed to the Board of Permit Appeals in 1976 by Mayor George Moscone. Milk's career was tragically cut short on November 27, 1978, when he and Moscone were assassinated.
AP
Music goes mobile —
The sound barrier is broken once again in the '70s, but this time at walking speed. Sony introduces the Walkman, the first commercially successful "personal stereo." Its wearable design and lightweight headphones gave listeners the freedom to listen to music privately while out in public. The product was an instant hit. The Walkman was a mark of coolness among consumers, setting a standard for future generations of personal devices like the Apple iPod.
Sony
Magic vs. Bird —
The 1979 national championship game between Michigan State and Indiana State still ranks as the most-watched college basketball game of all time, thanks to two up-and-coming superstars: Michigan State's Earvin "Magic" Johnson, bottom, and Indiana State's Larry Bird. Johnson's Spartans won the NCAA title, but the two players' rivalry was only just beginning. During their pro careers in the NBA, Bird's Boston Celtics and Johnson's Los Angeles Lakers would meet in the NBA Finals three times in the '80s.
AP
Three Mile Island —
On March 28, 1979, the worst nuclear accident in U.S. history took place in Pennsylvania when large amounts of reactor coolant and radioactive gases from the Three Mile Island power plant were released into the environment. Within days of the accident, 140,000 people evacuated their homes within a 20-mile radius of the plant. The accident brought widespread attention to reactor safety and large protests from anti-nuclear groups. Cleanup from the accident began in August 1979 and was not completed until December 1993.
Jarnnoux Patrick/Pars Match/Getty Images
The Iron Lady —
Margaret Thatcher celebrates her first election victory, becoming Britain's first female Prime Minister on May 4, 1979. As leader of the Conservative Party, Thatcher served three terms as Prime Minister, holding the office until 1990. That made her the longest-serving British Prime Minister of the 20th century.
Central Press/Getty Images
Deadliest day in U.S. aviation —
Only moments after takeoff, an engine separated from American Airlines Flight 191, causing the plane to crash in a field near Chicago's O'Hare International Airport on May 26, 1979. All 271 people on board the plane -- and two people on the ground -- were killed, making it the worst aviation accident ever on U.S. soil.
AP
SALT II —
The Strategic Arms Limitation Talks, otherwise known as SALT, were a series of meetings and treaties designed to limit and keep track of the missiles and nuclear weapons carried by the United States and the Soviet Union. The first treaty was signed in 1972, and the second one was signed in 1979. Six months after the second signing, however, the Soviet Union invaded Afghanistan, and the United States never ratified the SALT II agreement.
AP
An 'American Hustle' —
Scandals shaped a large part of the '70s political atmosphere, and the decade ended on a big one. During a two-year investigation, the FBI set up a sting operation dubbed "Abscam," videotaping politicians accepting bribes from a phony Arabian company in return for favors. The sting resulted in the conviction of six U.S. representatives, one senator, a mayor from New Jersey and members of the Philadelphia City Council. The operation was the inspiration for David O. Russell's 2013 film "American Hustle."
AP
From Boy Scout to murderer —
Ted Bundy, one of the most notorious serial killers of all time, stands trial in June 1979 for two of his many murders. Bundy received three death sentences for murders he committed in Florida, and he was executed on January 24, 1989. Bundy confessed to 30 murders before his death, but officials believe that number could be higher.
AP
Iran hostage crisis —
In November 1979, 66 Americans were taken hostage after supporters of Iran's Islamic Revolution took over the U.S. Embassy in Tehran, Iran. All female and African-American hostages were freed, but President Carter could not secure the other 52 hostages' freedom. They were finally released after Ronald Reagan was sworn in as President 444 days later. Many feel the Iran hostage crisis cost Carter a second term.
Alain Mingam/Gamma-Rapho/Getty Images
A living saint —
Agnes Gonxha Bojaxhiu, or "Mother Teresa," won the Nobel Peace Prize in 1979 for dedicating her life to helping the poor. Her foundation in Kolkata, India, "The Missionaries of Charity," took care of orphans, the sick and elderly. In 2003, she was beatified.
Getty Images
70 historic moments from the 1970s
There also was sexual liberation, which had something to do with women liberating themselves in the bedroom, too, but had as much to do with loosening norms around sex. In 1960, half of 19-year-old women who were unmarried had not yet had sex. By the late 1980s, as Nancy Cohen pointed out, two-thirds of all women had done the deed by age 18.
Cohen also noted that the invention of the birth control pill in the 1960s helped pave the way. Within five years after the first pill went on the market in 1960, 6 million American women were taking it. These women and others, and their male partners, entered the next decade literally with a radically different experience of sex and freedom. The year 1972 alone saw the publication of such groundbreaking books as “The Joy of Sex” and “Open Marriage.”
The ’70s also brought nonheterosexual sex into the spotlight. In 1969, when a gay bar in New York was raided by police, protests erupted and what became known as the Stonewall Riots was the formative moment of the gay rights movement that would continue to grow into the next decades.
For gay, lesbian, bisexual and transgender Americans, the 1970s was an era of increasing awakening and visibility, as well as backlash and persecution. In 1970, the first gay pride parade was held to commemorate the Stonewall Riots, and in 1973, the American Psychiatric Association finally saw fit to remove homosexuality from its official list of mental disorders.
Arguably, one of the quintessential songs of the 1970s captures all perspectives on the sexual revolution. “Aaaaaah, FREAK OUT!” sang the band, Chic, in their 1978 chart topper. While the old guard was certainly freaking out about the quickly and wildly shifting terrain of traditional American values, those doing the shifting were enjoying the ride. The song almost mocks those who are actually freaking out, turning their angst into a dance craze. “Come on along and have a real good time,” Chic invites. That moral ground you feel shifting below you? Think of it more as a sensual undulating and get with the groove.
It’s easy to look back and see the boundaries of the era’s aspirations. The Vietnam War, in the 1970s, and AIDS, in the 1980s, killed people and rightfully became preoccupying life-and-death issues. Complete sexual liberation and the brave new peaceful world a generation longed for ran headlong into hard and brutal reality.
In hindsight, the movements of the 1970s were much more about cultural triumphs than they were about legal and political changes. That the idea of women’s equality is more widely accepted today than it was 40 years ago is a victory. Yet the fact that women still earn a fraction of what men earn on average, and women of color even less, that rape and sexual assault remain so prevalent, that access to birth control and abortion and sex education are so actively still contested — these are reminders of how far we have yet to go.
“In the 1970s the sexual revolution was really mostly about sex,” wrote Hanna Rosin. “But now the sexual revolution has deepened into a more permanent kind of power for women.” Or, more accurately I think, at least a sense of personal power. But empowerment hasn’t necessarily translated into real economic and political leverage.
Are there more women running major companies, transgender men and women starring in Hollywood productions, parents nurturing their children’s healthy sexuality and now the nationwide right to marriage equality? Yes! But all around us — from the Bill Cosby story to campus rape to the killing of Kristina Gomez Reinwald, one of many transgender murder victims — there are almost daily reminders of the reality of subjugation based on gender, race and sexuality.
Without a doubt, women and gay people, and straight men too, experienced more individual freedom in the 1970s. But that doesn’t mean we are all liberated. Just like one black president or one female president doesn’t mean there’s no more racism or sexism.
Thus perhaps the greatest legacy of the 1970s wasn’t that it set us on a path to a destination — one we clearly haven’t reached yet — but that it defined desire, desire not only for individual, bodily autonomy, self-expression and pleasure but a desire that society fully reflect and respect our freedom. Wherever we are now, with respect to women’s rights and LGBT rights and sexual freedom, is a direct result of the 1970s. And the fact that we’re not satisfied yet is also the legacy of that era.
https://independent-magazine.org/2018/11/11/there-were-no-laws-against-it-then/
Kerry McElroy Writes about Elizabeth Taylor and Hugh Hefner for ...
Essays
“There Were No Laws Against It Then”
Abuse, Stardom, and the False Promise of Sexual Revolution in 1960s Hollywood
Tippi Hedren (1930-) was subjected to extraordinary physical and mental abuse and poor labor conditions by legendary director Alfred Hitchcock on the 1962 set of The Birds.
Kerry McElroy writes about Elizabeth Taylor, Hugh Hefner, and other icons of mid-century culture for this sixth series installment of Bette, Marilyn, and #MeToo: What Studio-Era Actresses Can Teach Us About Economics and Rebellion, Post-Weinstein.
The 1950s was the era of the frustrated housewife and The Feminine Mystique in American life. By the early 1960s, the social tumult that had been simmering below the surface finally began to boil over in movements for women’s rights, civil rights, and protest against the Vietnam War. But with so much change in the air, how did American corporate male interests respond? Further, in an era when women were beginning to throw out their girdles, read feminist theory, and join consciousness-raising movements, what became of the most objectifying business of all—Hollywood? As has happened over and over again in Hollywood history, things didn’t go as we might expect. The 1960s represents, like previous decades, a lost opportunity for women as regards equality in the film industry.
One particularly apt late studio-era Hollywood dynamic that demonstrated mistreatment as still the norm was the relationship between director and actress. The quintessential 1960s director, Alfred Hitchcock, had a typical albeit extreme version of the attitude that actors were annoyances—bodies to stage, voices to recite lines—who got in the way of a director’s vision and genius. When it came to gender, he once said, “I always believed in following the advice… ‘Torture the women!’” (Spoto xix). Tippi Hedren, his most famous leading lady, felt the brunt of his misconduct.
Hitchcock’s 1960s behavior—a blending of tyrannical work practices of jealousy and stalking—was not novel in Hollywood. His treatment of Hedren stands, however, as an acute example of the era’s director-actress horror story. He bullied her relentlessly on the set of The Birds, including a “two-minute assault scene [that] required a full week of eight hour days of shooting, days that left Tippi Hedren on the brink of emotional and physical collapse” (Tatar 38). In the ultimate irony, animal welfare specialists turned up to monitor the treatment of the birds, but no one did the same for Hedren (Ibid.).
Hedren rejected Hitchcock’s casting couch come-ons. As she said in an interview, “He made it very clear what was expected of me, but I was equally clear that I wasn’t interested” (Hedren qtd. in Oglethorpe). When she spurned his advances, the director’s passion soon turned to spite. Hitchcock became the worst type of stalker-boss, jealously monitoring anyone with whom Hedren had social relationships. In the face of Hedren’s steadfast efforts to maintain her dignity, Hitchcock resolved to destroy her career. Hitchcock refused to release Hedren from her contract to work with any other directors. She said candidly:
“He trapped me and ruined my career. Producers would ring up…offering me parts and Hitchcock would simply tell them that I wasn’t available….For two more years, he kept me under contract, paying me $600 a week….There was talk of me receiving an Academy Award nomination but he stopped that before it even got started” (Ibid.).
Hedren’s candid reflections are useful in this #MeToo movement, especially in light of the reckoning that has followed the Weinstein scandal. She is frank in describing a culture of enabling assistants and wives, individuals who knew that the abuse was going on but were afraid or unwilling to intervene. Hedren once recalled a disturbing interaction with the director’s wife: “She knew full well what was going on. I said: ‘It would just take one word from you to stop this’, and she just walked away, with a glazed look in her eyes” (Ibid.).
After being the world’s biggest star in the 1950s, Marilyn Monroe (1926-1962) fell into a period of addiction and instability in her thirties, culminating in her death.
In her older age and recent interviews, Hedren has demonstrated a sophisticated connection between her own treatment in pre-feminist Hollywood and today’s #MeToo moment. Hedren’s adoption of a modern feminist lexicon for what she endured in the 1950s and ’60s is revelatory. In a recent interview, Hedren described the abuse she endured, including the mental, sexual, and financial, saying “it was nothing new in Hollywood in those days.” She added, “You have to remember that this was a very different time from now and Hollywood was a very different place….Of course, sexual harassment still occurs, but there are far more safeguards in place to prevent it, far more awareness and knowledge of the dangers….There were no laws against it then” (Hedren qtd. in Hiscock). Since 2017, Hedren has given interviews arguing how difficult it was for women to speak about stalking, harassment, and sexual assault in those terms during her working years. It was only in later life that she recognized that what had been “normal” treatment in the Hollywood of her youth was actually verifiable abuse and assault. After the Weinstein scandal broke, Hedren went on record saying that his abuse reminded her of Hitchcock’s (Teeman).
For an example of a woman of the 1960s generation who never got her #MeToo moment, we can again turn to the biggest global star of the mid-twentieth century. In the last article, we looked at Marilyn Monroe as something of an economic sociologist of Hollywood. By the 1960s, however, her long career in Hollywood had taken its toll. As has been well-documented, the final years of her life were marked by mental illness and abuse. Living for years under toxic working conditions, which included addiction brought upon by studio-prescribed drug cocktails, had had a deteriorating and dangerous effect on Monroe’s stability. The nexus of trauma and mental illness that affected so many star-actresses of her era reached a kind of apotheosis in the cultural figure and real life biography of Monroe. As such, she became a subject of inquiry for many important feminists of the Second Wave era.
Andrea Dworkin, the anti-heterosexual, anti-porn activist, may seem like an unlikely person to write about Monroe. But in fact, Dworkin makes an extremely astute point about Monroe’s psyche and what the industry’s conditions had done to women like her. Whether or not Monroe committed suicide or died of accidental overdose, society must confront that she “hadn’t liked It all along—It—the It they had been doing to her, how many millions of times?….Her apparent suicide stood at once as accusation and answer: No, Marilyn Monroe, the ideal sexual female, had not liked it” (Dworkin qtd. in Steinem 179). One need only look at stills of the nighttime shooting of the famous subway grate scene in The Seven Year Itch, where hundreds of male passersby who had formed a mob ogled Monroe in her underwear, to see the violence (and violation) underlying her ostensibly glamorous life.
As the country was “awakening” through political protest and the sexual revolution, things were, counter-intuitively, often getting worse for women in Hollywood. Where might we look, then, for optimism in this decade? As the studio system fell apart, one megastar, Elizabeth Taylor, was able to use its weaknesses to her advantage and become a power player in her own right. She is intriguing as an economic actor of the late studio system.
Elizabeth Taylor was born in 1932, part of the last generation tied to the old-style indentured servitude of the studio system. Taylor had gone from an MGM child star of the 1940s to the most marketable sex symbol in the world. And yet, Taylor’s inexperience with structural abuses such as the “casting couch” demonstrates how certain women (particularly valuable young stars) were shielded from the grim realities facing most working actresses. Taylor wrote in her 1964 memoir, “I was too young to know why all of a sudden a young woman would be blackballed and never heard of again. Evidently, that casting couch bit did happen. Of course, I never even heard about it until years later,” (Taylor 12).
Taylor’s long history with the system and personal wit make her a particularly interesting actress/truth-teller. Of her career trajectory, she once said: “To do National Velvet I signed a contract with MGM—and became their chattel until I did Cleopatra” (Ibid.). Taylor was irreverent about the upbringing she received at MGM, noting how people called MGM’s main management building “The Iron Lung”: “You know, the executives tell you just how to breathe,” (Taylor 14). She was equally scathing about the lack of bodily autonomy and the policy of suspensions if a woman chose to have a child, noting wryly that “every time I got pregnant, kind-hearted old MGM would put me on suspension,” (Taylor 45). Taylor scornfully called MGM head Louis B. Mayer “Big Daddy,” mocking a man who told all his employees they were his children and forced them to celebrate his birthday (Taylor 15). As the studios disintegrated, Taylor became an increasingly savvy negotiator. In one dispute, she yelled at Mayer and told him to go to hell. She never went back to his office. This pattern of rebellious independence culminated in her finally becoming the first woman to earn a million-dollar salary (Taylor).
Elizabeth Taylor, who had risen from MGM child star to be the first actress paid a one million dollar salary for a role, waits between takes on the Rome set of Cleopatra in 1962.
Taylor’s personal life scandals of divorces, affairs, and “husband-stealing” (which began in the later years of the 1950s) had made her infamous by the 1960s—yet more popular than ever. The studios understood that as long as Taylor played sexy and sinister opposite an innocent blonde girl-next-door, she was worth millions. Taylor’s villainous image actually helped her to become a very wealthy woman. As it turned out, everyone wanted to cash in on the notoriety of Taylor and her affairs. Just as the sexy man-stealing vixen narrative reinforces patriarchal politics, it also reinforces misogynistic capitalism too. Lawrence Harvey, Taylor’s co-star in Butterfield 8, put it bluntly: “The bitch is where the money is” (Harvey qtd. in Walker). In the tabloid press, Taylor became “the black widow” and “the woman you love to hate” (Photoplay 1960).
Taylor’s most massive infidelity scandal, her affair with Richard Burton on the set of Cleopatra in 1962, was connected, in the minds of worried male executives, to the failing studio system itself. In a grave New York Times article about the future of the film business, Taylor and her behavior were specifically decried as a factor in what looked like industry catastrophe: “The current temperamental shenanigans of Elizabeth Taylor during the shooting of Cleopatra in Rome are also regarded with resentment and grave distraction by certain closely involved parties in this town. Little humor is had hereabout from the gag, ‘Liz fiddles while Hollywood burns’” (New York Times, May 6, 1962). The following month, insiders anxiously awaited a response to the gargantuan and troubled Cleopatra (New York Times, June 1, 1962). With hundreds of millions of dollars on the line, their anxiety demonstrates just how much power Taylor held in late-studio Hollywood. Her position was unlike any actress before.
Having said this, Taylor’s success stands, in many ways, alone. Much of what we’ve learned from actresses of this era, especially those who’ve added their voices to the #MeToo conversation, confirms that the 1960s was not a liberating era for women in Hollywood. Sexual exploitation, if somewhat repackaged, remained rampant.
Even as the studios collapsed, billions were made on the sexualizing of the industry and its actresses. Once again, it was men making money off of women’s bodies—just new men, pushing new, looser, sexual mores. Now, if an actress didn’t want to go along with a casting couch quid pro quo, she would be deemed “square” or “conservative.” Clothes grew skimpier, roles demanded more sex and nudity, and misogynistic advertising and pornography crept into the public space. All of this was part of very deliberate efforts by men to monetize the idea that ever more present sex was “modern” and “liberating.”
No one is more tied to the phony, exploitative, misogynist, capitalist side of the 1960s “sexual revolution” than Hugh Hefner and the rise of his Playboy empire (Valenti). It is not coincidental that the ideal woman became a “sex kitten,” or that a pornographer like Hefner could rise to the height of glamorous sophistication with an empire built on the dehumanization and exploitation of women. It would be remiss not to connect Hefner, his practices and legacy, with the earlier discussion of Marilyn Monroe and the abuses that brought about her untimely death. It is ultimately fitting, in ways this series has pointed out, that the young, no-name Hefner put his name on the map by selling nude photos of Monroe (a star he did not know) against her will (Kane). In keeping with the sociopathic male capitalist culture that lauded Hefner and his “business empire” until his death in 2017, Hefner gained initial attention through cruel exploitation. Even more fitting, and perfectly emblematic of pathological Hollywood power relations and treatment of women, is how Hefner stalked Monroe even in death by buying a plot so he would be buried next to her (Kang). The fact that Monroe’s final resting place is next to a stranger who pimped her out for his profit is sickening.
While women around the world were rising up for worker and spousal rights, the cultural imagery of 1960s Los Angeles was the Playboy Mansion and The Valley of the Dolls. Serial rapist Bill Cosby, Hefner’s closest friend, operated with impunity in Hefner’s clubs. As one of the Cosby survivors, PJ Masten, explained in the now-iconic 2015 survivors’ exposé in New York magazine, “I told my supervisor at the Playboy Club what [Cosby] did to me, and you know what she said to me? She said: ‘You do know that that’s Hefner’s best friend, right?’ I said, ‘Yes.’ She says to me: ‘Nobody’s going to believe you. I suggest you shut your mouth’” (Masten qtd. in Malone).
Hugh Hefner (1926-2017), celebrated for his business acumen and role in the “sexual revolution,” poses with the first issue of his Playboy magazine that featured unauthorized nude photos of Marilyn Monroe.
The 1960s “Bunny,” “Playmate,” and starlet culture depended upon women arriving in Hollywood in droves, willing to be objectified. Not since the 1920s was sexual exploitation so par for the course. The axiom that “if you won’t do it, there are a hundred girls waiting behind you who will” accelerated to extremes in the 1960s. As Danae Clark has written of actresses and particularly applicable to this era, “They must be induced to ‘live’ their exploitation and oppression in such a way that they do not experience or represent to themselves their position as one in which they are exploited” (Clark 21).
While women in other parts of the country made intellectually-driven gains in feminism, creating effective campaigns for changes in employment practices and developing the first domestic violence shelters and rape crisis centers, Los Angeles remained dominated by Hollywood and its calcified, dangerous views of women as sexualized bodies and commodities. As a result, a feminist awakening simply did not happen in Hollywood for decades. One of the reasons the #MeToo moment of 2017 was so earth-shattering and monumental was because it was finally Hollywood’s feminist moment, fifty years overdue.
But even in the country’s most masculinist-objectifying industry, which depended upon women’s continued acquiescence for its perpetuation, there were tiny acts of rebellion and strategy that predated #MeToo and Time’s Up. The recent groundswell of truth-telling by older women about their victimization by Cosby from the 1960s to the 1990s is an excellent case in point. What are now called “whisper networks” on social media, in which women warn women about predatory men, preceded the digital era. In the 1960s, the women who worked in the nightclubs and Playboy Clubs would warn one another to watch out for certain men, as women have been doing for centuries. It is all the more fitting, then, that in the digital age, these women are able to form a community across race and geography to help and warn one another again—and to finally have their #MeToo moment after three, four, or five decades. As Masten says, “I started getting private messages on Facebook from other former Bunnies: ‘He did me too, PJ. He got me too.’ There’s a couple of websites, ‘We believe the women,’ and Cosby sites too that we all created. And we talk, all the survivors” (Masten qtd. in Malone).
To conclude, the 1960s is a particularly sad decade for the history of women in Hollywood. While the rest of the world was opening up to women’s liberation, feminism failed to remake Hollywood. In fact, as women were reaching toward equality on economic and political fronts in other arenas, American film instead saw new heights of violent and pornographic misogyny. The decade saw a new, young generation of independent male filmmakers who mainstreamed rape and murder scenes and porn aesthetics under the guise of youth rebellion. As Brian De Palma remarked cheerfully of slasher-film convention, “I don’t particularly want to chop up women, but it seems to work” (De Palma qtd. in Knapp ix). The rise of the mutilation and killing of women on film at the very moment of ascendant women’s liberation in society is something that theorists of feminist gains and backlash like Naomi Wolf or Susan Faludi would consider no coincidence.
Further, the 1960s represent a separate failure, a missed opportunity on the economic front. The crumbling of the old, all-powerful studio system should have opened a door for women to finally take on more professional power and voice in the system. Instead, new models—the rise of Hefner’s empire and the explosion of the porn industry, to name a few—merely reproduced white, misogynistic hegemony.
In the next article, the series conclusion, we will end with a note of hope as to how the injustices that accelerated from the 1960s onward were, in some ways, finally brought to a rather abrupt halt in 2017. We will look to manifestos and marches but also to new understandings of history, economics, and institutions that may finally bring about permanent change in Hollywood.
“Elizabeth Taylor’s public image is subject of filmmakers’ study.” New York Times, June 1, 1962: n. pag.
Faludi, Susan. Backlash: The Undeclared War Against American Women. New York: Crown, 1991.
“Fox says Taylor reissues are not a popularity test.” New York Times, June 2, 1962.
“How much more can Liz take?” Photoplay, 1960: n. pag.
“Jacqueline Kennedy vs. Elizabeth Taylor – America’s two queens! A comparison of their day and nights! How they raise their children! How they treat their men!” Photoplay, June 1962: n. pag.
Kane, Vivian. “Hugh Hefner Is Still Exploiting Marilyn Monroe, Even In Death.” The Mary Sue, September 28, 2017.
Kang, Biba. “Hugh Hefner Has Immortalized Himself As A Disgusting Creep By Getting Buried Next To Marilyn Monroe.” The Independent, September 29, 2017.
Kelley, Kitty. Elizabeth Taylor, the Last Star. New York: Simon and Schuster, 1981.
“Liz’ butler tells everything he saw.” Photoplay, July 1962.
“Liz screams! Mob beats up Burton!” Photoplay, April 1963.
“Liz Taylor: does God always punish?” Photoplay, April 1960.
Malone, Noreen. “‘I’m No Longer Afraid’: 35 Women Tell Their Stories About Being Assaulted by Bill Cosby, and the Culture That Wouldn’t Listen.” New York, July 26, 2015.
“Miss Taylor Is Chided: Vatican Weekly Questions Her Right to Adopted Girl.” New York Times, April 13, 1962.
Oglethorpe, Tim. “Hitchcock? He was a psycho: As a TV drama reveals his sadistic abuse, Birds star Tippi Hedren tells how the director turned into a sexual predator who tried to destroy her.” Daily Mail Online, December 21, 2012.
“Out of the past, into the future with Miss Taylor.” New York Times, April 12, 1964.
Kerry McElroy is a feminist film historian and doctoral candidate at Concordia University, Montreal. Her thesis, entitled Class Acts: A Socio-Cultural History of Women, Labour, and Migration in Hollywood, focuses on the actress as working class subject and includes fieldwork interviews she conducted with women in Hollywood across professions. Her latest publication is an upcoming book chapter focused on the actress as activist that examines Louise Brooks, Amber Tamblyn, and the Bill Cosby accusers. She holds master’s degrees from Columbia and Carnegie Mellon Universities.
“There Were No Laws Against It Then”
Abuse, Stardom, and the False Promise of Sexual Revolution in 1960s Hollywood
Tippi Hedren (1930-) was subjected to extraordinary physical and mental abuse and poor labor conditions by legendary director Alfred Hitchcock on the 1962 set of The Birds.
Kerry McElroy writes about Elizabeth Taylor, Hugh Hefner, and other icons of mid-century culture for this sixth series installment of Bette, Marilyn, and #MeToo: What Studio-Era Actresses Can Teach Us About Economics and Rebellion, Post-Weinstein.
The 1950s was the era of the frustrated housewife and The Feminine Mystique in American life. By the early 1960s, the social tumult that had been simmering below the surface finally began to boil over among women, minorities, and anti-Vietnam protesters. But with so much change in the air, how did American corporate male interests respond? Further, in an era when women were beginning to throw out their girdles, read feminist theory, and join consciousness-raising movements, what became of the most objectifying business of all—Hollywood? As has happened over and over again in Hollywood history, things didn’t go as we might expect. The 1960s represents, like previous decades, a lost opportunity for women as regards equality in the film industry.
One particularly apt late studio-era Hollywood dynamic that demonstrated mistreatment as still the norm was the relationship between director and actress. The quintessential 1960s director, Alfred Hitchcock, had a typical albeit extreme version of the attitude that actors were annoyances—bodies to stage, voices to recite lines—who got in the way of a director’s vision and genius. When it came to gender, he once said, “‘I always believed in following the advice…‘Torture the women!’” (Spoto xix). Tippi Hedren, his most famous leading lady, felt the brunt of his misconduct.
Liberalism Radicalized: The Sexual Revolution, Multiculturalism, and the Rise of Identity Politics
In the past two decades, a new, more radical form of progressivism has taken over American social and political life, even finding its way into the White House. Fresh instances of this new progressivism appear every day. For example:
At the 2012 Democratic National Convention, progressives officially supported same-sex marriage as a civil right and unofficially rejected the word God in their platform;
President Barack Obama, labeled the “First Gay President” by Newsweek for his support of gay rights, has instructed the Attorney General of the United States not to defend the Defense of Marriage Act; and
Vice President Joe Biden has said that discrimination against transgendered persons is the “civil rights issue of our time.”[1]
The new progressivism divides Americans into categories of race, class, and gender. It renews the specter of race conflict by rejecting the goal of civil rights, in which individuals achieve equality under the law; instead, the goal is political racial solidarity against what is viewed as an inherently racist American system.
As a former law professor, Obama has been associated with the movement called Critical Race Theory, which—according to a proponent—“seeks to highlight the ways in which the law is not neutral and objective, but designed to support White supremacy and the subordination of people of color.”[2] Race politics has taken center stage, with both political parties vying for the loyalty of the growing number of Hispanic Americans. Obama attributed his recent presidential victory to the “Latino community,” while the Republican Party, admitting that it is “too old, too white,” scrambles to court the Latino vote.[3]
Finally, the politics of gender has grown as 55 percent of women voted for Obama in 2012.[4] Rallying around the Affordable Care Act, progressives accused those who opposed the new right to taxpayer-funded contraception of waging a “war on women.”
This is not the old progressivism of 1910, nor is it the self-styled “liberalism” of the 1940s and ’50s. The term “liberals” here refers to what many in the Democratic Party and American society called themselves between 1948 and 1969. These were the heirs to the early 20th century Progressives. Economically, these are the liberals of the generation that came of age during World War II: unionized blue-collar laborers and farmers.
In 1949, historian Arthur Schlesinger, Jr., defined economic liberalism as “democratic, regulated capitalism—the mixed society.”[5] He believed that liberals were the pragmatic “vital center” between the opposing dogmatisms of conservatives like Robert Taft, who wished to repeal the programs of the New Deal, and the new progressives, who challenged Harry Truman within the Democratic Party and ran Henry Wallace as a third-party presidential candidate in 1948.
These liberals of the center were an intensely patriotic group. They supported the Cold War because they thought Communism was just as bad as fascism. Truman fired Secretary of Commerce Wallace for his sympathy toward the Soviet Union, purged the new progressives from the Democratic Party, and made bureaucrats swear loyalty oaths. These liberals found common ground with Republican President Dwight Eisenhower, who called his platform “dynamic conservatism” because it combined fiscal conservatism and anti-Communism with an acceptance of the New Deal programs. Given their progressive roots, these liberals embraced big government.
Liberals were also socially and morally conservative: Roman Catholics and mainline Protestants with big families, bigger cars, and, increasingly, homes in the suburbs, where they watched Father Knows Best, I Love Lucy, and Gunsmoke. Culturally, the difference between liberals and old progressives, on the one hand, and neo-progressives, on the other, is obvious at a visceral level: One can’t imagine Woodrow Wilson, Harry Truman, or Lyndon Johnson chanting “om” with Allen Ginsberg at the 1967 Human Be-In, dropping acid with Timothy Leary, or inviting Jay-Z to the White House.
These old liberals did not disappear—in fact, they are today’s neoconservatives. Irving Kristol, Michael Novak, David Horowitz, Richard Perle, and Norman Podhoretz briefly supported the radicalism of the 1960s, and when they forsook their Leftist radicalism to return to the fold of 1950s liberalism, they called themselves “paleo-liberals.” Progressive Michael Harrington derisively called them “neo-conservatives” in 1973. In Kristol’s famous formulation, a neoconservative was “a liberal who has been mugged by reality”[6]—but a liberal nevertheless. This is why today’s neo-progressives, when they doubt Obama’s radical credentials, frequently call him a “neo-liberal.”[7]
The New Left, the political movement that grew out of neo-progressivism, transformed American politics. That transformation was a partial rejection in practice and a total rejection in theory of the principles and policies on which the 1950s self-styled liberals had risen to power and claimed victory in World War II.
That is not to say, however, that there was no connection between neo-progressivism and the earlier progressive and liberal movements in America. Neo-progressivism was a continuation of progressivism and liberalism in that it rejected the Founders’ teachings on natural rights, limited government, and constitutionalism.[8] And while there was a vast difference of both ends and means between the goals of LBJ’s Great Society and the neo-progressive radicals, the early Progressives to a certain extent did pave the way for both through their withering critique of the old order inherited from the Founding and their embrace of “progress” in both political and cultural terms.
The New Left combined what they called personal politics (the idea that American citizens have a right to all forms of self-expression) and cultural politics (the idea that cultural groups are entitled to special status) together as the twin pillars of a new identity politics. In the first, citizens today have more, not less, freedom from government in the realm of sexual expression; in the second, neo-progressives fractured the American electorate into various groups: the 1 percent, the 99 percent, the African American “community,” the Hispanic “community,” the white male vote, the white female vote, etc. These insular groups were no longer to be assimilated into a common American culture; they were to be given special status as oppressed or oppressor groups in a larger, more hostile view of the Western tradition. This view, commonly narrated in school textbooks, places America, Christianity, and capitalism at the vanguard of a colonial, exploitative, racist, sexist, homophobic imperialism.
The clear goal of the sexual revolution and the politics of race, class, and gender was to oppose the American liberal establishment and bring about a new kind of society founded upon a new standard of right. The personal politics of the New Left was intended to deconstruct the old liberal, progressive order to allow for a return to nature that would promote happiness and personal fulfillment in contemporary America. This, of course, meant something wholly different from the earlier conception of nature in which the Declaration of Independence, with its appeal to the “Laws of Nature and of Nature’s God,” was grounded.
This essay will return to the origins of neo-progressivism, which emerged in the 1950s as a revolt against liberalism across almost all academic fields. Two of those fields, psychology and sociology, provided the theories for sexual revolution and multiculturalism that would mobilize the New Left and so dominate and motivate liberal theory and politics today.
The Sexual Revolution
Free-love movements in America go back to the mid-19th century. The first sexual revolution, which began in the 1920s and was associated with the Progressive thinkers of the time, was confined to small bohemian groups, literati, and radical psychoanalysts that gathered in places like Greenwich Village. While it began to undermine the old moral order, it did not penetrate the mainstream as the sexual revolution of the 1960s did.
Some of the key thinkers behind the sexual revolution in both the 1920s and the 1960s can be traced to Freudo-Marxism within the field of psychology. Freudo-Marxist thinkers posited that American capitalism was akin to a disease and that the destruction of capitalism required the destruction of the moral underpinning that sustained it. Ironically, the Freudo-Marxists rejected fundamental teachings of both Marx and Freud. They abandoned crucial Marxian concepts: the labor theory of value, the rejection of private property, historical materialism, and the idea that mind is the byproduct of the mode of production. They also abandoned Freud’s theory of sublimation.
Sigmund Freud, the Austrian neurologist who founded psychoanalysis, had taught that the foundation of a civilization and its citizens’ ability to reason was an education in moral asceticism, or the renunciation of one’s instincts. The human being is initially controlled by a desire for pleasure—called the pleasure principle—and only by painful necessity does he adopt the reality principle, in which reason mediates between the impulse to pleasure-seeking and the reality that only some pleasures are attainable and compatible with civilization.
Freud described the trade-off in Civilization and its Discontents (1930): Repression—the thwarting of sexual desire—which made all human beings to a certain degree neurotic, was simultaneously the foundation of civilization, in which neurotics channel their nervous energy into other pursuits such as art and science. Self-denial was, in this understanding, the necessary basis for the higher pleasures of educated society.
Freud’s teachings appealed to liberals, who were interested in freedom from economic necessities and not sexual liberation. They wanted economic, not sexual, reforms and adopted Freud’s teachings as the best defense of the American economy and sexual morality. Freud’s nephew, Edward Bernays, who worked as a propagandist in the Wilson Administration, used Freud’s theory to justify state capitalism: Americans’ natural aggression could be channeled by advertisements toward consumerism, which he called “propaganda for peace.” People who were obsessed with buying things might be less inclined to fight wars. Corporations employed psychoanalysts to create advertisements that titillated their viewers’ sexuality and turned their unconscious sexual desires toward various products.
Liberals in the 1950s, appealing to Freud, openly taught sexual gratification, often in campy sex education videos, but still within marriage and traditional sex roles. Sex within romantic marriage would diminish neuroses. Sexual morality was grounded on the premise that sex was higher, or more “human,” when associated with duty. Without this sense of duty, they believed, humans abandoned reason and were led by pleasure itself to pre-marital sex, promiscuity, and adultery.
Wilhelm Reich and Sexual Liberation. The father of the modern sexual revolution in the U.S. was dissident psychoanalyst Wilhelm Reich (1897–1957). He participated in the sexual revolution of the 1920s, and his teachings inspired the counterculture in the 1950s and 1960s. Reich is ubiquitous among the works of the Beat writers Allen Ginsberg, Jack Kerouac, and William Burroughs; acclaimed authors J.D. Salinger, Norman Mailer, and Saul Bellow; and even actor and director Jack Nicholson.
Most significant was his influence on Paul Goodman, whom Dan Rather called “the guru of the New Left.” Goodman, an openly bisexual liberationist who underwent “Reichian analysis,” was one of Reich’s earliest American supporters. He founded gestalt psychotherapy and offered Reich’s ideas to a popular audience as a cure for the sexual suppression of liberalism. He became one of the most influential writers for the student radicals of the 1960s. His works, among them Growing Up Absurd (1960), were the most widely read in the Berkeley Free Speech Movement.
Reich was a proud, forceful man, a medical doctor with an incredible eye for detail. As a young man, he saw death on the grisly Italian front in World War I; though he despised war, he claimed it had given him a sense of heroic destiny, and he returned home a committed socialist. Maturing in 1920s Vienna, where he attended his first psychoanalytic seminars and studied under Freud, he combined his socialism with its libertine culture. He chafed at the psychoanalysts’ sexual conservatism as the older generation frowned upon his sexual indiscretions. He frequently cheated on his wife, a psychoanalyst herself.
Reich reveled in satisfying his natural desires and wished to free others to enjoy a similar freedom. “Sexuality,” he wrote, “is the center around which the life of society as a whole as well as the inner intellectual world of the individual revolves.”[9] The enemy of natural freedom, he believed, was religious and political strictures that led to shame, guilt, and jealousy. In 1929, he founded the Socialist Association for Sex Hygiene and Sexological Research; riding around in a van, he procured illegal abortions for girls with unwanted pregnancies, gave out contraception, and encouraged premarital sex.
In 1933, he was expelled from both the International Psychoanalytic Association and the Communist Party. With the rise of Nazism in Europe, in 1939, Reich’s followers in the United States secured him a visa and a lectureship at the New School for Social Research in New York. Setting up psychotherapeutic practice, Reich continued to have a devoted following. An experimentalist, he believed that he had discovered a new physical energy, which he called orgone, the material correlate to Freud’s sexual energy of libido. Losing interest in psychotherapy, he created therapies to release the flow of this cosmic energy. He spent the remaining years of his life, often in isolation, performing experiments to better understand it.
Reich built great boxes, called accumulators, in which patients could reabsorb their expended orgone and cloud-busters to unclog pockets of orgone in the atmosphere. (Some blueberry farmers once paid him to induce rain.) Reich stressed the implications of his discovery for national defense against both “red fascists” and UFOs (not to mention its implications for energy independence), but the United States Food and Drug Administration was not enthusiastic. After high-profile articles in Harper’s and The New Republic about the “growing Reich cult” that surrounded his sexual theory and orgone experiments, the FDA indicted him for transporting orgone accumulators across state lines. A jury found him guilty of fraud, and the judge ordered that his accumulators be smashed and his books burnt. Convicted as a fraudulent quack, Reich died in 1957 in a federal penitentiary.
For this reason, later psychologists tended to caricature his ideas and distance themselves from his work. However, the core of Reich’s social theory was quite persuasive to many. Reich’s central idea, his rejection of genital repression and his proposal that sexual liberation destroys the morals underlying capitalism, was repeated by leading thinkers like Paul Goodman, Herbert Marcuse, and Norman O. Brown. In 1964, Time magazine recognized Reich’s influence:[10]
Dr. Wilhelm Reich may have been a prophet. For now it sometimes seems that all America is one big orgone box…. From innumerable screens and stages, posters and pages, it flashes larger-than-life-sized images of sex…. Gradually, the belief spread that repression, not license, was the great evil, and that sexual matters belonged in the realm of science, not morals.
Reich’s eccentricity was matched by a certain intellectual brilliance and a broad willingness to entertain unconventional opinions. He founded character analysis, an entirely new field of psychoanalytic study that analyzed neurotic characters, not neurotic symptoms, meaning that it viewed certain types of human beings as ill. He extended his practice beyond individual therapy, seeking answers in social organization for the pathologies that he witnessed in the clinic.
Among the ill character types, one most threatened society: “mass man,” whose character was the basis, he argued, of fascism. The fascist possessed “a sado-masochistic character” and, fearing his own political freedom—and pleasure—turned to dictatorial tyrants. The United States, he asserted, was not far behind Nazi Germany. The root of Americans’ self-denial lay in their capitalist society’s rejection of the true concept of human nature. Reich argued that sexual repression, formerly viewed as essential to all civilization, creates and exacerbates the very neuroses Freud had claimed to ameliorate; indeed, he claimed, the greatest human sickness is morality.
Reich and the Freudian revisionists argued that there were “laws of nature” and a natural right that could be discovered by human reason. A return to the study of this nature could reveal how to ameliorate human problems. Rejecting “the relativistic view,” revisionist Erich Fromm wrote: “It is the task of the ‘science of man’ to arrive eventually at a correct description of what deserves to be called human nature.”[11]
Beginning from the position that what is pleasing is natural while self-denial is educated by convention, Reich posited an innate biological growth in humans that had been repressed for political purposes. The naturally pleasing included food, drink, warmth, sex, and the seed of science—curiosity, or pleasure in knowing. Upon these basic needs, various higher activities naturally developed; technology, for one, develops in the service of these needs.
The pleasurable life was incompatible with the moral, which was “antithetical to nature.”[12] It was free from pangs of duty, which were internalized in the human conscience and in a sense of honor. In his writings, Reich provides an intense criticism of what he calls “compulsive morality” and the religions used by political regimes to inculcate it. For Reich, sadism—which included aggression focused back upon oneself or upon others—unfortunately had been the underpinning of all human relationships since the beginning of organized political societies.
The very habits of civilized males, who defer chivalrously to women, he regarded as inseparable from their unconscious belief that women are inferior. Hence, the moment when men feel they have acted most honorably is precisely when they have displayed their domineering desire and carved out a realm of the “masculine.” Reich realized that chivalry was not conscious; it was a habit educated first in the differentiation of the sexes—a moral distinction inculcated by the patriarchal family and supported by a political culture in which men and women are given sexual roles. A scientific psychology, he believed, would make patients conscious of the nonsense of morality and its internalized guilt.
But Reich found in his private practice that revealing to patients the logical and historical origins of their guilt did not work. Freud, he concluded, had falsely divided words and deeds: Patients in the clinic reflected upon their moral inhibitions without discarding their reserved habits. Reich believed that they used logic, words detached from emotion, as a defense mechanism for their still-ingrained morality. The patient might talk freely about sex and morality as if he possessed no guilt or shame, but in his physical behavior, he retained a “character armor” of the same moral inhibitions.[13] The disease of morality, like a virus, lay tucked away, obscured by philosophic jargon.
Because morality was embedded in habit, it would have to be removed by new habits. Hence, Reich constructed a revolutionary private therapy that focused on both acknowledging and acting out psychic tensions to remove, layer by layer, the armor of guilt and shame that had been established as part of the moral education. Reich focused on sex because it was the core of the entire character structure. When the process was completed, the successful patient would be “genitally potent,” meaning spontaneous and without inhibitions.
Reich was the first to combine Freud and Marx in a new revolutionary dialectic. He believed that conventional morality, as a “plague,” had become so dangerous politically in fascism and state capitalism that it threatened human existence. To protect those living the healthy pleasurable existence and to preserve humanity from the sadism of morality, he constructed a utopian political program called “natural work democracy” to attack morality at its core.[14] In this utopia, the patriarchal family, which represses sexuality, is replaced by the “natural family,” which liberates its members from sexual constraints and cultivates that which is pleasurable.[15]
These sexually liberated citizens would demand new “genital rights,” among which were the abolition of laws against abortion and homosexuality, the reform of marriage and divorce laws, free birth control advice and contraception, and the abolition of laws preventing sex education.[16]
To this Reich added other teachings, such as instruction in masturbation, the right to “extramarital sexual intercourse,” and the “right of the unmarried woman to have a partner.”[17] Reich mocked the hypocritical liberal who advocated sexual education for his daughter yet frowned on her sexual pursuits. He explained:[18]
[T]he girl does not merely need to be free genitally; she also needs privacy, a means of contraception, a sexually potent friend who is capable of love…, understanding parents, and a sex-affirmative social atmosphere—all the more so if her financial means of breaking through the social barriers against adolescent sexual activity are minimal.
The family, however, he viewed as the destructive institutional tool of a broader social and sadistic morality: the morality of capitalism. To destroy capitalism, Reich posited that the old socialists’ logical arguments about economic exploitation were insufficient; one must destroy the moral habits upon which capitalism is founded, such as self-restraint, industry, frugality, and punctuality. Hence the Reichian dialectic: Sexual repression was intertwined with economic exploitation, and sexual liberation would destroy the basis for capitalism.
The sexually liberated individual would never again work a demeaning job that bored him; he would seek the equivalent of the good orgasm in all aspects of life—for example, by creativity in labor. He would demand the redistributive goods that his conscience formerly prohibited him from demanding. Reich wrote of his patients:[19]
Quite spontaneously, patients began to feel the moralistic attitudes of the environment as something alien and peculiar…. Their attitude toward their work changed. If, until then, they had worked mechanically…now they became discriminating [and] were stirred by a need to engage in some practical work in which they could take a personal interest. If the work which they performed was such that it was capable of absorbing their interests, they blossomed. If however, their work was of a mechanical nature as, for example, that of an office employee, businessman, or middle attorney, then it became an almost intolerable burden. In other cases, there was a complete breakdown in work when the patient became capable of genital gratification…. It turned out [they] were always patients who had, until then, performed their work on the basis of a compulsive sense of duty, at the expense of the inner desires they had repudiated.
Reich did not believe that there could be an end to all repression, but he did believe that humans could eliminate much of it. Once human beings were freed from toil and able to indulge in what other Freudian revisionists called “polymorphous perversity”—a life of celebrating pleasure in all of its forms—they would refuse to return to the drudgery of their old jobs. They would demand the means to self-fulfillment as a privilege of citizenship.
Herbert Marcuse, the Humanists, and the 1960s Counterculture. Herbert Marcuse (1898–1979), a member of the Freudo-Marxist Frankfurt School and professor of political philosophy at Columbia, Harvard, Brandeis, and the University of California San Diego, renewed the question of eliminating repression in Eros and Civilization (1955). He applied his theory to politics in a trenchant critique of capitalist society entitled One-Dimensional Man (1964), which sold over 300,000 copies—a best-seller by academic standards. Journalists called him the “Father of the New Left” because of his immense popularity among student radicals.
Marcuse wrote a cursory critique of Reich, but a careful study reveals considerable similarity between the two. A soft-spoken philosopher and émigré from Nazi Germany, Marcuse rejected Freud’s and what he considered the whole of Western philosophy’s characterization of reason as something that “subdues the instincts.”[20] This, he thought, was the moralistic view of reason as the inhibitor of desire, which consequently divides the human person against itself. Rather, Marcuse argued that the philosophic life, or Reason properly speaking, was itself a life of desire. This life of Eros harmonized and unified the soul and therefore constituted the proper end of man. In Marcuse’s own words, “the things of nature become free to be what they are. But to be what they are they depend on the erotic attitude: they receive their telos only in it.”[21]
Marcuse heralded a new society to accompany his philosophic teaching. Historically, repression was needed because man faced necessity; political regimes, including the modern capitalist society, had been constructed upon moral teachings that erected a severe conscience: the self-denial required for industrial production. But now these virtues, which had been inculcated to solve the economic problem, were no longer necessary; indeed, he claimed that they intensified human aggression and thereby posed a threat to society.
Marcuse sought a progressive revolution to end what he called “surplus repression” and bring about the “aesthetic state”—something akin to European socialism.[22] “Polymorphous sexuality” would be liberated at the expense of the capitalist work ethic. The workday would be dramatically shortened, and individuals would choose their work, viewing it more as play. Modern man would accept a lower standard of living in return for the pleasures of instinctual gratification. He would fully detach sex from monogamy and reproduction and completely accept what he formerly viewed as sexual perversion.
In the progressive society, the “sadism” of traditional morality would be viewed as a perversion of human nature. Marcuse claimed that sadism could be removed in the fully erotic person: “Being is experienced as gratification, which unites man and nature so that the fulfillment of man is at the same time the fulfillment, without violence, of nature.”[23] The human body in its entirety—indeed, the whole human personality—would be viewed as an instrument of desire and pleasure.
Marcuse was not alone; Reich’s revolt was followed by other former psychoanalysts, who called themselves Humanists. One of their leading lights was Abraham Maslow, who advocated a return to a study of what was right by nature: “It is possible to study this inner nature scientifically and to discover what it is like—not invent—discover.”[24]
Maslow argued that a close study of natural human development could be the basis for an ethical psychology; hence, it was the nature of an individual, not moral principles, that set the parameters for self-actualization. “Intrinsic guilt,” he wrote, “is the betrayal of one’s own inner nature or self, a turning off the path to self-actualization.”[25] Self-actualization includes the achievement of peak experiences, which should be fostered and not limited by society. Although Maslow came to loathe what he called the “cultural & ethical relativism” of the 1960s, it was he who had written that sex was for most people “one of the easiest ways of getting peak experiences.”[26] While liberals defended the old morality as socially necessary, the Humanists argued that it now posed too great a danger to mankind because of the new technologies of destruction: Historically, those who secretly loathed human nature had turned to political-religious crusades to change it. Hence, the Humanists encouraged a political program to overturn the proposed institutions of repression: the nuclear family and conventional sexual mores.
They espoused socialism, the ideal regime for the pleasurable existence as it provides material goods—food, clothing, and shelter—and also the conditions for higher pleasures. New positive, political rights would be logically grounded in a new progressive framework that would give individuals the choices that allow them to actualize themselves within the realm of their possibilities so as to allow each individual to flower to his unique potential. This they called authenticity.
There are limits to these choices; even the Humanists regarded traditional sadism as unnatural and believed that violent offenders must be incarcerated. On the other hand, the sadism of asceticism must be removed by public education and government-subsidized therapy. The most common form of sadism is the construction of the idea of two distinct genders, a social imposition that limits personal growth by confining it within traditional gender roles. A healthy society, said the Humanists, would then recognize the many unique manifestations of erotic desire and grant sexual rights to its citizens to explore and express their discovered gender identities.
Humanism is imitated by a vulgar version: a teaching of self-creation that often results in eccentricity, sexual promiscuity, and cultivated absurdity. Still, forced to choose, the Humanists preferred even this vulgar version to the liberals’ repression. Taking sides against the middle class in the culture wars of the 1950s, psychologists wrote popular books on sex to attack the old morality.
Leading psychologists and countercultural icons called American culture fascist, or sexually repressive, and Reich’s sexual liberation became the measure of the healthy society. Ginsberg and Kerouac, looking for a more authentic existence, turned away from American middle-class conformity to what they claimed was the healthier African American culture. The Beats imitated jazz, the word itself slang for sex, in a new, spontaneous lifestyle and a new kind of writing. Kerouac, who featured Reich in On the Road, writes, “At lilac evening I walked…wishing I were a Negro, feeling that the best the white world had offered was not enough ecstasy for me, not enough life, joy, kicks, darkness, music, not enough night.”[27] Ginsberg eulogized in Howl: “I saw the best minds of my generation destroyed by madness, starving hysterical naked, dragging themselves through the negro streets at dawn looking for an angry fix.”[28] In the growing counterculture, minority cultures were said to be superior precisely because, in contrast to white American culture, they celebrated “authentic” personalities.
In the 1960s, the counterculture went mainstream. Self-acceptance was embodied in songs and slogans like “Be True to Yourself” or “Follow Your Heart.” One could be false to oneself, or inauthentic, only if he desired what others told him he ought to desire. Hugh Hefner published the “Playboy Philosophy,” urging the liberation of sexual desire without guilt, and had his own variety/talk show featuring American celebrities. Helen Gurley Brown, in the bestseller Sex and the Single Girl (1962), rejected the idea of guilt for premarital sex.
College students would capture this new aesthetic of freedom in pithy slogans: “Make Love, not War”; “If it feels good, do it”; “Go With the Flow.” Reich’s influence on the New Left in West Germany was unparalleled. Protesting students scrawled slogans in graffiti: “Read Wilhelm Reich and act accordingly.”[29] In 1968 in Paris, student demonstrators threw copies of Reich’s books at police as the agents of sexual repression.
The Sociological Critique of Liberalism
Besides the psychological and psychiatric source in the sexual revolution, the second pillar of neo-progressivism—the politics of race, class, and gender—can be traced to the teachings of sociologist C. Wright Mills (1916–1962) on personal and cultural politics. While these movements have led to bigger, more intrusive government centralization, their original purpose was in fact to decentralize the American administrative state and state capitalism by fragmenting the American identity and carving it up into competing groups.
Interestingly, the new sociology approached political questions from a perspective opposite to psychology. While it too recognized a natural individual spontaneity, it ultimately stressed that human biological desires were largely shaped by society; spontaneity could never grow into a rational freedom unless one possessed choices within the social structure. Mills, asking which social organization best allowed individuals to thrive, was most concerned about the diminishing freedom under 1950s state capitalism. Coining the term “New Left,” he defined for future radicals an agenda in opposition to liberalism.
Mills, like Reich, was idiosyncratic, combining physical with intellectual toughness. As a boy, his family was constantly on the move, and he made few close friends. He left Texas A&M University after his first year (it is rumored he was expelled after a fistfight). Four years later, he graduated from the University of Texas at Austin, where he excelled as an undergraduate, publishing articles in top sociological journals.
As a professor at Columbia University, Mills remained an outsider. He dressed in flannel shirts like one of the Beats, rode a motorcycle, and attacked snooty sociologists for their convoluted theories, which were written in pseudo-scientific gobbledygook so as to confuse the average reader. Scorning the limp, academic niche writers, he used logical rigor to penetrate big topics in stirring books. His writing, said the 1960s radicals, was manly and assertive, unlike the passivity of their well-adjusted white-collar fathers.
Mills’s career centered around a sociological study and critique of American liberalism, which he believed had derailed from its original goal of achieving reason and freedom. “For in our time,” he wrote, “these two values, reason and freedom, are in obvious yet subtle peril.”[30]
The “central goal of Western humanism,” wrote Mills, was “the audacious control by reason of man’s fate.”[31] Liberals had assumed that this goal could be accomplished by efficient bureaucracies, but the new scientific management had actually stunted the individual’s ability to reason and master his own fate. The attainment of true freedom, wrote Mills, here echoing earlier Progressives like John Dewey, would require a radical social reconstruction:[32]
“The kingdom of freedom” of which Marx and the left in general have dreamed involves the mastering of one’s fate. A free society entails the social possibility and the psychological capacity of men to make rational political choices. The sociological theory of character development conceives of man as capable of making such choices only under favorable institutional conditions. It thus leads to an emphasis upon the necessity of changing institutions in order to enlarge man’s capacity to live freely.
This road to freedom required a rejection of the old liberalism. A new social philosophy must be grounded, Mills wrote, “on the assumption that the liberal ethos, as developed in the first two decades of this century by such men as Beard, Dewey, Holmes, is now often irrelevant, and that the Marxian view, popular in the American ’thirties, is now often inadequate” because “they do not enable us to understand what is essential to our time.”[33]
Mills provided a sociological critique of the West. He argued that its theories of economic and intellectual freedom—liberalism and socialism—were passing phases. To usher in a new post-modern epoch, Mills sought to expose the myths of liberalism, replace them with new conceptions of “Reason and Freedom,” and organize a New Left capable of overthrowing state capitalism. Mills led the charge in a sociological assault on American society.
The first myth that Mills attacked was that of middle-class morality. Rugged individualism and the entrepreneurial spirit were “illusions” perpetuated by the state but practiced only by an insignificant class of small businessmen. The old virtues had been replaced by “scientism,” which applied the techniques of control from the physical sciences to human beings. In truth, liberals hated individuality and innovation; what they really loved, the unspoken morality of corporate cubicles, was efficiency: the stuffy air of the boardroom, long-winded meetings, and being nice.
Paul Goodman famously critiqued this “efficiency.” The new service jobs in modern society held no intrinsic importance: They were useless, and capitalistic society was absurd because it promoted uselessness. Young Americans, he claimed, knew the difference between useful work, which could be justified as life-important, and the efficient production of baubles and hamburgers for consumption.
Mills found the morality of efficiency to be even more insidious. He argued that white-collar work was dehumanizing: Workers became “cheerful robots” who only “pretend[ed] interest” in their own work.[34] They were forced to affect, in insincere smiles, that they liked their customers. In the “personality market,” their personalities were mechanized and their spontaneity destroyed.
The nuclear family, wrote Mills and other sociologists such as David Riesman and William H. Whyte, was the instrument of conformity. Riesman wrote of the “despotic walls of the patriarchal family.”[35] The father, the “organization man” who donned a “gray flannel suit,” was stripped of seriousness and hence of authority and virility as well. As presented in Rebel Without a Cause (1955), which starred James Dean, domineering neurotic mothers had taken over, depriving young males of their rite of passage, leaving them confused and turning them to delinquency to prove their manhood.
The capitalistic disruption of the family also led to a denial of feminine sexuality. The frigid mother, detached from the unmanly role of her husband, fled to exotic sexual escapades or alcohol to find the excitement lacking at home. Housewives were stunted humans—Mills called them “darling little slaves”—confined to the prisons of suburban homes.[36] Social life was shaped by the children in a “filiarchy”—or a rule by children—that directed all aspects of life.[37] Going farther, Goodman called suburbanites the “new proletariat,” the servile child-bearers for the state.[38]
Sociologists generally reserved special hatred for the new suburbs, the “apotheosis of pragmatism,” which molded Americans into conformity.[39] Extensions of corporate growth, the suburbs reproduced like polyps, lumping together large numbers of rootless, interchangeable strangers with no higher collective goal than moneymaking.
In his bestseller The Lonely Crowd (1950), David Riesman wrote that suburbanites, having lost their social institutions, lose on the one hand the necessary socialization for an authoritative sense of self required to resist conformity and, on the other, the traditions against which an autonomous individual derives a sense of purpose.[40] Desperate for community and seeking meaningful ties, the residents grow shallow roots—bridge clubs, canasta, and bowling leagues—that are just enough for the bare minimum of communal life. There is much social activity but little real civic or political activity. Friends are chosen for convenience, and new associations, led by tiny, unspectacular leaders, produce brief, ephemeral traditions. Surrendering to the fleeting opinion of the group, the residents place a premium on “adjustment”; indeed, the best-adjusted are the ones who are constantly adjusting.
Examining the “character structure” of these suburbanites, Riesman announced the decline of the “inner-directed personality,” which follows the demands of conscience, and the rise of the “other-directed personality,” which is anxious to receive the approval of others. Toleration of others becomes the premiere social virtue: Residents are intolerant of those who are not tolerant. But such toleration produces greater conformity because it levels all opinions, leaving nothing sacred.
Mills also attacked the liberal myth of “scientific” rationalization: the belief that greater bureaucracy leads to more rational outcomes. In fact, it led to chaotic, irrational policies such as Mutually Assured Destruction. The growth of bureaucracies did not correlate with more rational policies or freer individuals. Lost in a rat maze of red tape, citizens took on the superstitions of medieval peasants:[41]
Science, it turns out, is not a technological Second Coming. Universal education may lead to technological idiocy and nationalist provinciality, rather than to the informed and independent intelligence. Rationally organized social arrangements are not necessarily a means of increased freedom—for the individual or for the society. In fact, often they are a means of tyranny and manipulation, a means of expropriating the very chance to reason, the very capacity to act as a free man.
Such tyranny begat tyranny. The abstracted world in which bureaucrats lived, functioned, and related made them capable of the greatest atrocities. American foreign policy only spread the slavery of state capitalism; it exhibited an aggressive expansion akin to other world empires. In the name of anti-Communism, America tyrannized over smaller countries, designating them the “Third World,” and in the name of liberating them exploited their natural resources.
But the greatest myth of all, wrote Mills, was the myth of liberal democracy and pluralism. Liberals argued that America’s pluralist politics balanced interests, safeguarding its people from authoritarian rule, but Mills found only a hierarchical “Power Elite” that manipulated the public through media to maintain the status quo.[42] It commanded the resources of vast, impersonal bureaucratic organizations and tyrannized over its subjects’ lives from afar. It staffed a convoluted bureaucracy with a priesthood of experts, who dissemble the workings of government. It stripped citizens of a sense of power and made of democracy an empty formality: Liberals and conservatives “are now parts of one and the same official line.”[43] Through personality adjustment, it herded children into public education to deprive them of charisma, not to cultivate it. It prevented opposition by monopolizing its subjects’ social and private roles, predicting the formation of new power groups, fragmenting their power bases, and co-opting their identities. It used ever more sophisticated and technological methods of control to atomize and alienate its subjects.
There was little difference, in Mills’s estimation, between the rule of the Power Elite in the United States and the Soviet Union.
The Fragmentation of America. To defeat this tyranny and create a truly free society of informed, rational participants, Mills called for a new political philosophy: In what he called the “Sociological Imagination,” social scientists would lay aside their neutrality and engage in public discourse, as well as criticism, over political issues.[44] Because society is maintained by authority—or the recognition of commonly held values—the new sociologist must create theories that question and weaken the power structure. He must illuminate and solve, not ignore, social problems. Instead of a value neutrality, he helps to create the conditions for a free society.
Mills became this advisor to the political movement that he named the New Left. He supplied the information to reinvigorate radical groups, which would come not from the Power Elite, but from the democratic process itself.
For the democratic process to work, he said, there must be a return to actual, not formal, democracy. Actual democracy requires the formation of new groups or “publics,” each invigorated by belief in its own value system and sustained by its own symbols of authority. Mills looked for new authoritative groups that could revolt against the Power Elite and renew the political process, as the existing groups were part of the corrupt system.
The Old Left, consumed with stale, Marxist philosophy, was demoralized and no longer radical; blue-collar workers had become the tools of government-sponsored unions. Even the word proletariat, seldom used by 1950s socialists, no longer meant solidarity. Liberal class consciousness, especially in relation to minority groups, had become a matter of charity.
Mills next turned to the growing class of white-collar workers for a revolutionary movement, but he found that they were unorganized, dependent upon large bureaucracies, and lacking in class consciousness. Mills needed a new proletariat:[45]
[W]ho is it that is getting fed up? Who is it that is getting disgusted with what Marx called “all the old crap”? Who is it that is thinking and acting in radical ways? All over the world—in the bloc, outside the bloc, and in between—the answer is the same: It is the young intelligentsia.
The young intelligentsia, to create new authoritative communities, must resurrect utopianism. Utopianism, or the creation of the ideal human community in theory, must provide a standard for criticism of the existing one. Hence the neo-progressives’ constant reference, even today, to “community.”
Mills argued that a return to community was necessary to revitalize democracy. The New Left would decentralize, or fragment, the American Establishment into competing values as opposed to interests. Participatory democracy would occur in new communities along two different lines: personal politics and cultural politics.
Personal politics meant a politics that appealed to meaningful personal traits in order to create a group loyalty that would rival loyalty to the old unifying symbols of Americanism. Feminists as a political group, for example, could command the loyalty of individual members by appealing to their individual concerns over reproduction, child care, and career opportunities in order to redefine the traditionally feminine roles of wife and mother. Cultural politics, or multiculturalism, would fragment the American public along ethnic lines. The ultimate goal, wrote Mills, was that these groups take over the technologies of state capitalism and wield them for human ends.
The resolution between these two views, one which argued that government remove itself from personal questions and one that wished to fragment American society into conflicting moral views, was the politics of civil liberties, particularly sexual expression, combined with the individual’s civil rights as a member of a protected “insular minority”: the politics of race, class, and gender.
The Politics of the New Left
Reich’s and Mills’s ideas, in various forms, dominated the cultural and political conversation for the next half-century and still dominate today. They took root politically in the New Left, a movement named by Mills. Todd Gitlin, president of Students for a Democratic Society (SDS) from 1963 to 1964 and today a professor of sociology at Columbia, calls Mills “the most inspiring sociologist of the second half of the twentieth century” and “a guiding knight of radicalism.”[46]
Tom Hayden, Gitlin’s predecessor as SDS president, wrote his master’s thesis on Mills, romantically entitled Radical Nomad. Imitating Mills, Hayden wrote “A Letter to the New (Young) Left” with the goal of creating a radical movement among college students. Hayden’s 1962 Port Huron Statement called for a return to humanist values; the goal, Mills’s conception of freedom, could be achieved through Mills’s idea of a politicization of the personal and cultural. With a sense of urgency, Hayden called for a “reflective working out of a politics anew” and listed the “modern problems”: nuclear war, racism, meaningless work, nationalism, American affluence set against world hunger, overpopulation against limited world resources, and government manipulation against “participative” democracy.[47]
The Great Society’s expansion of government programs in the style of the New Deal was hardly a common ground between liberals and radicals. Rather, it was the focal point of a liberal–radical battle over ideals. It was precisely the methods—a redistributive scheme that entrenched “the Corporate State”—that the radicals attacked.[48] Despite President Lyndon B. Johnson’s pandering, the radicals rejected the Great Society as a duplicitous scheme concocted both to fill the meaningless void of the Affluent Society and to secure the reins of corporate power. The domestic policy programs, they claimed, were essentially a form of graft. Funding for the Elementary and Secondary Education Act, the Housing Act, and the Job Corps seldom went to the poor, and when it did, it was not, as Johnson claimed, a “hand-up,” but a “hand-out.”[49]
Battling the liberals, radicals within the Great Society programs tried to divert their funding to rally and empower new dissident groups in society: to mobilize the poor and ethnic minorities for a new radical politics. Projects included the training of 20 activists by community organizer Saul Alinsky, who promised to go to the poor and “rub raw the sores of discontent,” and LeRoi Jones’s Black Arts Theater, which produced Marxist, black nationalist dramas on the streets of Harlem. Jones wrote, “The Black Artist’s role is to engage in the destruction of America as he knows it. His role is to report and reflect so precisely the nature of the society…[that] white men [will] tremble, curse, and go mad, because they will be drenched in the filth of their evil.”[50] Revolution was the aim: In one of his plays, a parody of the radio-TV Jack Benny Program, Benny’s black valet, Rochester, robs and conquers his white oppressors.
Little wonder, then, that when James Farmer, who launched the 1961 Freedom Rides, proposed an adult literacy program, President Johnson personally axed it and demanded an end to “kooks and sociologists” in the Office of Economic Opportunity.[51]
Radicals rejected the Great Society because they rejected its conception of greatness. In his denunciation of the Great Society, Marcuse claimed it was a question of conflicting utopias: the Great Society’s capitalist utopia of ever-increasing expansion in production and technology or the Socialist Society of individuals, freed from a lifestyle of consumption, who choose their own form of labor. The Socialist Society adopts a “new consciousness,” while citizens of the Great Society mistakenly believe that they are free.[52] The policies of the Great Society and the freedom that Americans fought to spread around the globe were, in reality, slavery.
The student radicals saw in the Vietnam War proof that the fight against liberalism was a matter of principle, not policy. The war, writes Gitlin, “was symptomatic of a rotten system or even an irredeemably monstrous civilization.”[53] The American system was poisoned from the roots. Vietnam was a “racist war” waged by “a technologically superior, white-led juggernaut against a largely peasant Asian society.” And it was not just a foreign war; America’s tyranny abroad mirrored its tyranny at home. It was, in Gitlin’s words, a “seamless economic and cultural system characterized by white supremacy, murderous technology, and irresponsible central power devoid of justice.” Heroic revolutionaries were needed to oppose this juggernaut, and the formation of Mills’s revolutionary publics fit well into a Marxist framework.
The concept of a proletariat—an exploited or repressed group—proved malleable. The revolution would be waged by a new proletariat, one with different grievances. According to Marcuse, “This revolution would find its impetus and origins not so much in economic misery, but in revolt against imposed needs and pleasures, revolt against the misery and the insanity of the affluent society.” In a different kind of cultural revolution, the New Left would mobilize “marginal groups” that had not been politicized before.[54]
By 1965, one year into the Great Society, a Freudo-Marxist framework was firmly established. Multiculturalism, feminism, and the student rights movements all placed themselves within the context of a broader crusade for liberation from Western capitalism’s oppression and repression.
The vanguard against capitalist expansion was Third World peoples, as yet uncorrupted by liberalism. Mills and Marcuse looked to Cuba for radical leadership to provide a third way toward freedom. Student radicals flew to Havana, where they met with Communist leaders who confirmed their heady ideas that they were the rebel leaders in an American civil war. SDS required its leadership to read Frantz Fanon, a psychiatrist turned Algerian revolutionary, whose book The Wretched of the Earth was popular among student radicals. French philosopher Jean-Paul Sartre wrote a preface to the 1964 English edition, in which he denounced Western oppression and proclaimed Third World superiority. Fanon’s book, similar to Sartre’s own existential psychoanalysis, posits a psychology of colonialism in which the oppressed internalize the symbols of their oppressors. He prescribes revolting against Europe and the West and implementing a new third way of achieving the humanist ideals that Europeans had failed to achieve. Europeans, in turn, must look to the Third World for their own salvation.
In America, Fanon’s colonial theory complemented Malcolm X’s black nationalism, which viewed blacks as a people colonized by imperialist Americans. Angered by the influence of whites within the Student Nonviolent Coordinating Committee (SNCC), black power advocates expelled them in 1965–1966. A 1967 Chicago SNCC leaflet stated, “We have to all learn to become leaders for ourselves and remove all white values from our minds.... We must fill ourselves with hate for all white things.”[55]
The Black Panthers turned to “revolutionary nationalism” to reinvigorate this sense of community. Stokely Carmichael and Charles Hamilton argued that blacks must reject their American identity and “reassert their own definitions, to reclaim their history, their culture; to create their own sense of community and togetherness.”[56] Following protests by student radicals, San Francisco State opened up the first Black Studies Department in 1968.
White students who traveled south to work in the civil rights movement both condemned their own culture and in their crusade identified with the oppressed. They admired the heroism and envied the sense of purpose that they encountered.
Mario Savio, who had worked in SNCC’s Freedom Summer weeks before, started the Berkeley Free Speech Movement in 1964 when campus police attempted to arrest an activist for setting up a display table. The state-funded universities, Savio concluded, were part of the same oppressive system that controlled the South. The universities of the liberal state were part of a manipulative machine, devoid of higher purpose and focused on power. Savio lashed out in his Sproul Hall address: “There’s a time when the operation of the machine becomes so odious, makes you so sick at heart, that…you’ve got to put your bodies upon the gears and upon the wheels…upon the levers, upon all the apparatus, and you’ve got to make it stop!”[57]
When SDS president Paul Potter, in a 1965 speech to 25,000 onlookers, exhorted his listeners to “name the system,” students knew it to be a thinly veiled reference to capitalism.[58] The next SDS president, Carl Oglesby, called it “corporate liberalism.” Tom Hayden urged the powerless students in the North to take inspiration from the powerless blacks who fought segregation in the South. The concern of the university, he wrote, should not be “passing along the morality of the middle class, nor the morality of the white man, nor even the morality of this potpourri we call ‘Western society.’”[59]
Students claimed that they were an oppressed and repressed minority, one that had a key role to play in the revolution. One SDS member’s speech, “Toward a Student Syndicalist Movement,” linked college students, as fellow victims, with the bombed villages of Vietnam; another member gave a speech calling on white-collar workers, in reality repressed slaves belonging to “the new working class,” to reject their white chauvinism and join Third World revolutionaries against Western capitalist oppression.[60]
Sexual Politics. Radical feminism, according to Sara Evans, a student radical, one of the first historians of the movement, and today a professor at the University of Minnesota, began as a form of Mills’s personal politics that proceeded from the civil rights movement. Female civil rights workers associated Southern segregation between the races with fears of miscegenation. Hence, racism did not stand alone; it protected the entire Southern patriarchal family and culture, in which women played a traditional role. Student women who were crusading for equal rights at first accepted traditional roles in the movement—cleaning and secretarial work.
What was called “radical feminism” began as a revolt against male chauvinism within the civil rights movement. A 1964 SNCC paper noted that “this is no more a man’s world than it is a white world.” The following year, Casey Hayden and Mary King equated the “racial caste system” with “the sexual caste system.”[61] Feminist activists placed this oppression within a Marxist framework, applying colonial theory: “As we analyze the position of women in capitalist society and especially in the United States we find that women are in a colonial relationship to men and we recognize ourselves as part of the Third World.”[62]
In their fight for independence, women in all classes could find common interests and create new symbols of unity more powerful than those of American liberalism, especially the “family unit [that] perpetuates the traditional role of women and the autocratic and paternalistic role of men.” With the creation of an identity group, feminists could fracture American society, entrench a new political position, and demand new rights. An early feminist manifesto demanded state provision of birth control, abortion, and free child care.[63]
Evans recalls that in 1967, she witnessed the “creation of a new, radical feminist movement.”[64] In 1969, along with hundreds of other women all over the country, she entered graduate school “determined to study women’s history.” Women’s studies courses were first offered in 1969; degrees followed in 1970.
Feminism grew stronger as a social movement during the 1970s, expanding through “consciousness-raising” groups. Feminists won new political rights, including:
The 1964 Civil Rights Act, which bars discrimination in hiring on the basis of sex;
Title X of the Public Health Service Act (1970), which provides access to contraceptive services;
Title IX of the 1972 Education Amendments, which bars discrimination on the basis of sex in “any education program or activity receiving federal financial assistance” and requires that federal funds be allocated equally to male and female collegiate programs; and
The 1978 Pregnancy Discrimination Act, which barred discrimination on the basis of “pregnancy, childbirth, or related medical conditions.”
Women as a civil rights group received heightened-scrutiny protection in Craig v. Boren (1976). The ACLU lawyer in that case, Ruth Bader Ginsburg, who now sits on the Supreme Court of the United States, recently commented that feminism as a movement would not be over until the Supreme Court had nine female justices.[65]
The feminist movement’s concept of gender went hand in hand with the sexual revolution, which Marcuse said was essential for a political revolution: “The New Left should develop the political implications of the moral and sexual rebellion of the youth…. [W]e should try to transform the sexual and moral rebellion into a political movement.”[66] Because, as some noted, feminism crusades to end the constraints of “femaleness,” it advocates one’s right to claim any gender without discrimination.
While numerous works in the 1960s had sensationalized the “homosexual underworld,” Gore Vidal’s works gave it a human face. In The City and the Pillar (1948), he wished to show the “‘naturalness’ of homosexual relations, as well as [make] the point that there is of course no such thing as a homosexual…. [T]he word is an adjective describing a sexual action, not a noun describing a recognizable type. All human beings are bisexual.”[67]
The 1969 Stonewall Riots, in which homosexuals fought New York City police, are frequently labeled the beginning of the gay rights movement. They were commemorated the following year in the first gay pride marches in Los Angeles, Chicago, and New York. In 1969, Paul Goodman wrote “The Politics of Being Queer,” which identified homosexuals as another civil rights group that is politically repressed and oppressed. He begins, “In essential ways, my homosexual needs have made me a nigger.”[68] Gay, lesbian, and transgendered rights were recognized as an issue of radical solidarity. In a 1970 open letter, Black Panther Huey Newton promoted an alliance between black revolutionaries and “the Women’s Liberation and Gay Liberation Movements.”[69] Sexual minorities began to crusade for civil liberties and civil rights.
The Supreme Court carved out an entirely new realm of civil liberties under the “right to privacy.” Under Tenth Amendment police powers, states had passed laws to uphold what Chief Justice Warren Burger called the “Judeo–Christian moral and ethical standards” of “Western civilization.”[70] Following the cultural shift, between 1965 and 1977, the Court replaced this “Judeo–Christian” morality with the new progressive morality.
According to the Court, the “autonomy of the person” is constitutionally respected in “decisions relating to marriage, procreation, contraception, family relationships, child rearing, and education.”[71] Sexually, the Court recognized new rights for married adults, single adults, and minors to buy contraception. So too was the old Judeo–Christian notion of the “person concept” replaced with the Court’s recognition of a woman’s right to an abortion.[72] The Court finally overturned sodomy laws as an unconstitutional violation of the privacy rights of consenting adults.
The Court has not altogether rejected a role for moral legislation: Sadistic acts, Justice Anthony Kennedy has recognized, deserve no constitutional protection, for they constitute “moral depravity.”[73] However, Justice Kennedy repeated the new morality, which extends individual autonomy to “consensual sexual relations conducted in private.” Such acts constitute “private conduct not harmful to others.”[74]
Sexual minorities have also been recognized as groups that require civil rights protection under the Equal Protection Clause. In 1996, the Court overturned a Colorado law banning special protections for gays and lesbians because it “named as a solitary class persons who were homosexuals, lesbians, or bisexual either by ‘orientation, conduct, practices or relationships’…and deprived them of protection under state antidiscrimination laws.” The Court recognized the motive as sadism “born of animosity toward the class of persons affected,” hence with no “rational relation to a legitimate governmental purpose.”[75] This year, the Obama Administration filed a brief arguing that California’s ban on same-sex marriage “violates the fundamental constitutional guarantee of equal protection.”[76]
Conclusion
When asked in 1974 whether the New Left had succeeded, Herbert Marcuse said that it had “changed the consciousness of broad sectors of the population.”[77] He was right: Over the past 50 years, neo-progressives have successfully implemented Reich’s sexual revolution and Mills’s identity politics. Race, class, and gender studies are the core of the modern liberal curriculum at public schools and universities. Today, the New Left not only controls the Democratic Party, but also has taken root broadly in upper-middle-class American culture.
Neo-progressives assent to an underlying logic for the good life and the good society, but that logic is radically different from the previous liberal morality. The cultural shift has granted all Americans unprecedented individual freedoms in sexual expression. So too has it erected a new politically correct morality along with an official narrative that highlights the West as the engine of oppression and repression.
Conservatives and old liberals who seek to oppose these changes must return to where they lost the battle: the intellectual arena. They should begin with a genealogy of neo-progressivism to weaken the myths that sustain it. They should also take a lesson from Mills, who begged his readers to ask:[78]
What varieties of men and women now prevail in this society and in this period?... In what ways are they selected and formed, liberated and repressed, made sensitive and blunted? What kinds of “human nature” are revealed in the conduct and character we observe in this society in this period?
On such an intellectual foundation, they might successfully engage, as Mills also wrote, in the “struggles over the types of human beings that will eventually prevail.”[79]
—Kevin Slack is an Assistant Professor of Politics at Hillsdale College.
[48] Hayden preferred this term to “power elite” as “more accurate because of its focus on the joined political and economic institutions.” See Tom Hayden, Radical Nomad: C. Wright Mills and His Times (Boulder, Colo.: Paradigm Publishers, 2006), p. 135.
[49] Allen J. Matusow, The Unraveling of America: A History of Liberalism in the 1960s (Athens, Ga.: University of Georgia Press, 2009), pp. 217ff.
Source: Click - Changing Sexual Attitudes - Women’s Sexuality History (https://www.cliohistory.org/click/body-health/sexual)
Body & Health
Changing Sexual Attitudes and Options
Going to jail for providing contraceptives to married women? Margaret Sanger believed that birth control was the key to women’s personal freedom.
Excerpt from “Margaret Sanger: A Public Nuisance,” a film by Terese Svoboda and Steve Bull. (Running time 5:36) Used with permission. The complete film is available from Women Make Movies.
The simultaneous appearance of the sexual revolution of the 1960s and the revival of feminism — both symbolized in the popular mind by the fashion trend of the miniskirt — suggested a causal link between the two, but each had its own history. The sexual revolution basically opened to women many of the pleasures and responsibilities of sexual expression that had previously been reserved for men, challenging the double standard that allowed men sexual license but punished women who claimed the same freedom. The recognition of women’s sexual needs and desires, which extended first to married women and then, somewhat more tentatively, to single women, and wider acceptance of women’s ability to choose when and with whom to have sexual relations make up some of the most far-reaching changes of recent history.
Ellen Key
Changing attitudes about female sexuality began to take hold early in the twentieth century. The popularization of the ideas of Sigmund Freud, Havelock Ellis, and Ellen Key promoted an ideal wherein sexual satisfaction, not repression, limitation, or abstinence, was encouraged. And access to a satisfying sexual life was just as important for wives as for husbands. This view of companionate marriage was widely accepted by the 1920s. Historians Estelle Freedman and John D’Emilio summed up this change: the meaning of sexuality shifted “from a primary association with reproduction within families to a primary association with emotional intimacy and physical pleasure for individuals.”
These evolving attitudes about women’s sexuality intersected with, and indeed were made possible also by, another long-term trend: the ability to regulate conception, both to limit the number of children and to offer opportunities for sexual expression that did not result in procreation. Traditionally, without the ability to choose when to bear children and how many to have, women’s lives could be upended by an unplanned pregnancy. Although rudimentary birth control information and devices were available in the nineteenth century (which helps explain how the birthrate dropped by half from 1800 to 1900), they were generally illegal, especially after the passage of the Comstock Act of 1873. Birth control methods such as withdrawal or condoms required the cooperation of male partners, which was not always forthcoming; female methods such as douching, using pessaries (an early form of diaphragm), or limiting intercourse were better than nothing but were still hit or miss.
Margaret Sanger
Margaret Sanger, a public health nurse with feminist sensibilities, set out to change women’s lack of control over their reproductive lives by opening the first birth control clinic in the country in New York City in 1916. She was quickly arrested. Thus began her forty-year campaign to make birth control legal and widely available. Her device of choice was the diaphragm, which maximized women’s control over the sex act. In the 1920s Sanger deliberately allied herself with the medical establishment in the hope that their support would bolster what was still seen as a dangerously radical cause. In effect, she gave doctors control over the dissemination of birth control information and devices. At the same time she located women’s sexuality squarely within the construct of marriage, a far less radical stance than her original focus on women’s sexual liberation.
In the 1930s and 1940s, Sanger successfully chipped away at restrictions limiting access to birth control for married women, although the issue wasn’t finally settled until the 1965 Supreme Court case of Griswold v. Connecticut. Far more controversial was the idea of single women having easy access to birth control, precisely because that meant condoning premeditated sexual activity outside the institution of marriage. This is where the birth control battles were fought in the 1960s and beyond.
Single women had already been expanding the boundaries of acceptable sexual expression over the course of the twentieth century, as Alfred Kinsey found in his 1953 study of female sexual behavior. But there were still clear lines about what was considered proper sexual behavior for most young women, depending on the norms of various communities. Generally speaking, extensive “petting” short of intercourse was tolerated; “proper” or “morally clean” young women were supposed to keep their virginity for their wedding night. “Going all the way” carried a distinct social stigma, as well as the risk of unplanned pregnancy.
Photo by T. Takemoto
If single women did engage in sexual relations (as many clearly did), they often did so without access to reliable birth control information. One mistake could literally change a woman’s life. If a teenager or young adult found herself pregnant, she could either hastily marry the father, seek an illegal abortion, raise the baby as a single mother, or put the baby up for adoption.
The possibilities for increased sexual activity without fear of pregnancy took a giant leap forward in the 1960s with the introduction of two new forms of contraception, the birth control pill (first offered in 1960 and widely available by mid-decade) and the intrauterine device (IUD). Somewhat ironically, wider access to safe, reliable contraception put more pressure on women to engage in sexual relationships with men now that fear of pregnancy was removed. And yet feminist consciousness raising helped women imagine sexuality from a female point of view, not just in terms of pleasing men. This was a truly revolutionary perspective for many women.
How to Navigate our Interactive Timeline
Place the cursor over the timeline to scroll up and down within the timeline itself. If you place the cursor anywhere else on the page, you can scroll up and down in the whole page – but the timeline won’t scroll.
To see what’s in the timeline beyond the top or bottom of the window, use the white “dragger” located on the right edge of the timeline. (It looks like a small white disk with an up-arrow and a down-arrow attached to it.) If you click on the dragger, you can move the whole timeline up or down, so you can see more of it. If the dragger won’t move any further, then you’ve reached one end of the timeline.
Click on one of the timeline entries and it will display a short description of the subject. It may also include an image, a video, or a link to more information within our website or on another website.
Our timelines are also available in our Resource Library in non-interactive format.
Yellow bars mark entries that appear in every chapter
This icon indicates a book
This icon indicates a film
1946 Common Sense
The enormous popularity of Dr. Benjamin Spock’s Common Sense Book of Baby and Child Care was due to its empathetic tone, respect for mothers, the focus on children’s emotional needs, calls for flexibility in the raising of children, and its comprehensive coverage. The book changed the way baby-boomers were raised. Dr. Spock obituary. “Feminists Protest Spock's Sex Bias,” by Louise Bernikow.
1948 The Kinsey Reports
Alfred Kinsey and his team at the Kinsey Institute published two Kinsey Reports about modern sexual behavior: Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female (1953). The books, while academic in nature, expanded the public discussion about sexuality. History of The Kinsey Institute. Review in New York Times. Dr. Kinsey obituary. Photo: staff at Kinsey Institute for Sex Research; Smithsonian Institution archives.
1952 Christine Jorgensen
Christine Jorgensen, born George Jorgensen, became the first American widely known for completing sex reassignment surgery. Her initial surgeries were performed in Denmark. Jorgensen became a public advocate for transsexuals. Christine Jorgensen’s website.
1953 Playboy
In December 1953, the first issue of Hugh Hefner’s Playboy magazine featured film star Marilyn Monroe on its cover and inside was a picture of a nude Monroe as the Sweetheart of the Month (now called the Playmate of the Month). Almost 55,000 copies, at 50 cents each, were sold. History of Playboy. “A Bunny’s Tale” by Gloria Steinem.
1956 Althea Gibson
When Althea Gibson won the French Championships (now the French Open) she became the first African American to win a tennis Grand Slam title. The next year, after she won Wimbledon and the U.S. Nationals, she became the top-ranked woman tennis player. In 1957, she became the first Black woman to appear on the cover of Time magazine. Biography. Tennis Hall of Fame. Brooklyn Museum exhibit. Photo: Library of Congress.
1957 Odd Girl Out
Ann Bannon’s lesbian pulp fiction novel, Odd Girl Out, was the first in a series of books about lesbian love and relationships. The series, known as The Beebo Brinker Chronicles, was published from 1957 to 1962 and the books are now considered lesbian classics. Bannon (a pseudonym for Ann Weldy) is recognized as a pioneering lesbian writer. Ann Bannon’s website.
1962 Sex and the Single Girl
Helen Gurley Brown’s best-selling self-help book, Sex and the Single Girl, told women how to meet men, act sexy, succeed at work, handle money, entertain, dress, and have an affair. In 1965, Brown became the editor-in-chief of Cosmopolitan magazine. Brown told the New York Times in 1982: “I am a feminist.” Obituary, Washington Post. Obituary, New York Times. Photo: Library of Congress.
1962 Sherri Finkbine
Sherri Finkbine, a married woman from Arizona, sought an abortion after discovering that the drug Thalidomide, which she had been taking, causes deformity in fetuses. She was forced to travel to Sweden because her local hospital refused to allow the abortion. Finkbine’s story raised public awareness about the drug Thalidomide and contributed to changes in abortion laws. Abortion Collection, Smith College. “Abortion mother returns home,” BBC.
1963 The Bell Jar
Sylvia Plath’s semi-autobiographical novel The Bell Jar tells the story of a young woman’s struggle with depression and her feelings about women’s roles. Plath committed suicide soon after the book was published. Biography. Reflections on her legacy.
1963 The Feminine Mystique
Betty Friedan’s study about “the problem that has no name” was an instant best-seller and raised public awareness about the lives of well-educated women. The book is considered a foundational text of the post-World War II women’s movement. Review in NY Times, 1963. Review in The Atlantic, 2013.
1966 National Organization for Women
Seen by its founders as a “NAACP for women,” NOW was established to work independently of government agencies in the effort to increase women’s rights and fight sex discrimination. In 1968, NOW endorsed the passage of the Equal Rights Amendment. NOW. NOW records, Schlesinger Library. Photo, founders of NOW.
1967 Twiggy arrives in New York
When the English model Twiggy arrived in New York City in 1967 she was already an icon with her thinness, bobbed hair, and “Twiggy Dresses.” The “Twiggy phenomenon” lasted long after Twiggy retired from modeling in 1970. Photos, Life Magazine. Twiggy’s website.
1968 Miss America Pageant protest
New York Radical Women organized this protest to bring public attention to sexism, especially society’s ideas about women and beauty. Some of the 400 protestors tossed into a “Freedom Trash Can” items they considered demeaning or oppressive, including women’s magazines, wigs, high heels, and bras. The protest brought media attention to the feminist movement. Video excerpt from “Miss America,” a Clio film.
1968 Myth of the Vaginal Orgasm
Anne Koedt’s classic feminist text The Myth of the Vaginal Orgasm argues that the clitoris, not the vagina, is the center of a woman’s sexual sensitivity and that ignorance about this leads to ideas about women’s frigidity. Koedt drew on the work of Alfred Kinsey and Masters and Johnson. Full text.
1968 Poor Black Women
Patricia Robinson’s “Poor Black Women” was written in response to “Birth Control Pills and Black Children,” a statement by the Black Unity Party of Peekskill, New York, and “The Sisters Reply” to that statement. Robinson writes that the time had come for black women “to question aggressive male domination and the class society which enforces it, capitalism.” “Poor Black Women,” Duke University Digital Collections.
1969 NARAL
NARAL, the National Association for the Repeal of Abortion Laws, was created in 1969 to eliminate laws that could compel a woman to bear a child against her will. NARAL’s initial work included political actions to repeal abortion laws. After Roe v. Wade, NARAL’s new name was the National Abortion Rights Action League. NARAL. NARAL records, Schlesinger Library.
1969 Woodstock
The festival called Woodstock was held in Bethel, New York and attended by close to half a million people. The “3 Days of Peace and Music” is considered one of the great music festivals of all time and a defining moment for the counterculture of the 1960s. Woodstock Festival History. Photo by Mark Goff; public domain.
1970 Are You There God?
Judy Blume’s novel, Are You There God? It’s Me, Margaret, centers on a sixth grader’s life as she deals with her family’s Jewish and Christian religions and tackles coming-of-age issues such as her changing body, a new school environment, and boys. Judy Blume’s website. Judy Blume biography.
1970 “Take a Good Look”
In her essay “Take a Good Look at Our Problems,” which was published in the book Black Women’s Liberation, Pamela Newman wrote “The very idea that women are here on earth just for having children isn’t true either. We have minds and have the right to determine what we do and say. Child rearing should be a profession, not an automatic duty.” Full text (page 12), Duke University Digital Collections.
1970 “Women and Their Bodies”
“Women and Their Bodies” was first a workshop at a women’s liberation conference at Emmanuel College (near Boston). Workshop members founded the Doctor’s Group, which became the Boston Women’s Health Book Collective. The Doctor’s Group issued a 193-page course booklet, “Women and Their Bodies,” which was reissued in 1971 as Our Bodies, Ourselves. History of Our Bodies, Ourselves.
1970 “Women in Revolt”
Newsweek’s “Women in Revolt” cover story on the women’s movement ran on the same day that 46 women employees of Newsweek, with Eleanor Holmes Norton as their lawyer, filed an EEOC complaint charging the magazine with sex discrimination. The women charged that they were hired as researchers while men were hired as writers. For its cover story on the women’s movement, Newsweek hired a woman freelance writer. The Good Girls Revolt by Lynn Povich. Review and interview on NPR.
1971 Dalkon Shield
The Dalkon Shield, a contraceptive intrauterine device (IUD), was promoted by the A. H. Robins Company as a safe and reliable form of birth control when it went on the market in 1971. The deaths of eighteen women and injuries to thousands more led to court cases and settlements that cost billions of dollars. In 1976, the U.S. Food and Drug Administration mandated that IUDs be tested and approved. “The Charge: Gynocide” by Barbara Ehrenreich. National Women's Health Network records.
1971 The Click! Moment
The idea of the “Click! moment” was coined by Jane O’Reilly. “The women in the group looked at her, looked at each other, and ... click! A moment of truth. The shock of recognition. Instant sisterhood... Those clicks are coming faster and faster. They were nearly audible last summer, which was a very angry summer for American women. Not redneck-angry from screaming because we are so frustrated and unfulfilled-angry, but clicking-things-into-place-angry, because we have suddenly and shockingly perceived the basic disorder in what has been believed to be the natural order of things.” Article, “The Housewife's Moment of Truth,” published in the first issue of Ms. Magazine and in New York Magazine. Republished in The Girl I Left Behind, by Jane O'Reilly (Macmillan, 1980). Jane O'Reilly papers, Schlesinger Library.
1971 “Rape: The All-American Crime”
Susan Griffin’s “Rape: The All-American Crime” appeared in the New Left journal Ramparts. It was then published as a book titled Rape: The Power of Consciousness. Griffin argues that rape must not be understood as a sexual act but as resulting from the political system of patriarchy. Full text.
1971 “The Motherhood Myth”
Bev Cole’s essay “Black Women and the Motherhood Myth” was published in “The Right to Choose Abortion,” a pamphlet issued by Female Liberation, a Boston women’s liberation group. Cole’s first sentence is “The abortion issue must be faced by each and every woman, especially Black and Third World women.” Full text, p. 52, Before Roe v. Wade (PDF).
1972 Feminists for Life
Feminists for Life, an anti-abortion organization, was founded with the vision of creating “a better world in which no woman would be driven by desperation to abortion.” The organization provides a “herstory worth repeating” about how “early feminists” were “overwhelmingly pro-life.” Feminists for Life. Video, “The Feminist Case Against Abortion.”
1972 The Healthy Homosexual
George Weinberg coined the term homophobia in his book Society and the Healthy Homosexual. The book, which criticized psychiatric and psychological theories about sexuality, was considered a revolutionary manifesto. George Weinberg interview.
1972 Witches, Midwives and Nurses
Barbara Ehrenreich and Deirdre English’s Witches, Midwives and Nurses — as well as their two subsequent books, Complaints and Disorders (1973) and For Her Own Good (1978) — are popular feminist texts that raised awareness about women’s historical involvement and the patriarchal controls that led to their loss of power in the healthcare arena. Excerpt (PDF). Barbara Ehrenreich video interview.
1972 “Chicanas and Abortion”
Beverly Padilla published “Chicanas and Abortion” in The Militant. She argues that a woman’s control of her body and mind is a “collective problem,” not a personal issue. The article also addresses the relationship of Chicanas to Chicano men. Reprinted in Chicana Feminist Thought, p. 120.
1973 COYOTE
COYOTE (Cut Off Your Old Tired Ethics) was founded by Margo St. James to advocate for the rights of sex workers (including prostitutes, strippers, and those involved in pornography). St. James argued that sex workers should be considered laborers with the right to the same protections given other workers. COYOTE. COYOTE records, Schlesinger Library. St. James Infirmary.
1973 Fear of Flying
In Fear of Flying, a novel about a woman’s sexual fantasies and acts, Erica Jong explored the psychological effects of spontaneous sexual encounters on the narrator’s marriage, poetry, and personal liberation. Erica Jong’s website. Video interview.
1973 Our Bodies, Ourselves
Written by members of the Boston Women’s Health Book Collective, this 276-page book of information, illustrations, and personal narratives aimed to empower women to understand their bodies and navigate the health care system. Our Bodies Ourselves.
1974 Women’s Advocates
When Women’s Advocates was founded in St. Paul, Minnesota, in 1974 it became the first battered women’s shelter in the United States. At first it was a hotline; then a house was purchased. The group now provides care for about 45 women and children every day. Women’s Advocates. Sharon Rice-Vaughan video interview.
1975 Take Back the Night
In Philadelphia, after the murder of a woman who was walking home alone at night, feminists held a march to raise public awareness of violence against women. In Pittsburgh in 1977, the name “Take Back the Night” was coined by Anne Pride at an anti-violence rally. Take Back the Night marches are now held annually across the country. Take Back The Night. National Online Resource Center on Violence Against Women.
1975 UN International Women’s Year
The United Nations declared 1976 to 1985 the Decade of Women, and four international conferences on women were held: in Mexico City (1975), Copenhagen (1980), Nairobi (1985), and Beijing (1995). Results of the conferences have included resolutions to eliminate discrimination and violence against women. UN Women. Declaration of Mexico, 1975. Full report.
1976 La Casa de las Madres
Founded by women in the California Bay Area, La Casa de las Madres was the first domestic violence shelter founded in the state and the third in the nation. In 2014, Sonia Melara, one of the shelter’s founders, was appointed to the San Francisco Police Commission. La Casa de las Madres.
1976 Planned Parenthood v. Danforth
The Supreme Court case of Planned Parenthood of Central Missouri v. Danforth stated that women seeking an abortion did not require parental or spousal consent. The court also ruled that abortion providers must do certain record keeping and reporting. Case documents, Cornell University. Case Summary.
1976 The Hite Report
The Hite Report on Female Sexuality, by historian Shere Hite, documents responses to a questionnaire Hite distributed to women asking about their sexual experiences and feelings. Women’s orgasms are a key subject of the book. Shere Hite’s website. Review in The New York Times.
1976 Title IX Protest at Yale
The women of the Yale Crew wrote Title IX on their bodies and read a protest to the Director of Women’s Athletics about unequal facilities and treatment, and finally obtained showers at the Yale Boathouse. Newspaper coverage raised public awareness about the implementation of Title IX. Video excerpt from the film “A Hero for Daisy” used with permission from 50 Eggs. “Title IX Pioneer Honored by Sports Museum of New England.”
1976 WAVAW
Women Against Violence Against Women was founded in Los Angeles in 1976 to protest the film “Snuff.” Boston women founded their chapter a year later after the release of the Rolling Stones album “Black and Blue.” The focus of their work was on protesting violence against women in the media. The group disbanded in 1984. WAVAW records, Northeastern University. WAVAW archive, UCLA.
1977 CESA
The article “Sterilization Abuse: A Task for the Women’s Movement,” written by members of the Chicago Committee to End Sterilization Abuse (CESA), provided statistics and shocking stories about involuntary sterilization of women, mostly women of color. The article concluded that in addition to legal battles against sterilization abuse, it is important to provide education about patient rights to the communities most affected by the abuse. Full text. History of CESA. Relf v. Weinberger.
1978 Carol Leigh & “Sex Worker”
Carol Leigh is an artist, prostitute and activist known as Scarlot Harlot. While attending a San Francisco Women Against Violence in Pornography and Media Conference, she coined the term “sex worker” to describe all individuals who work in the sex industry. Biography. Origins of terminology, Prostitutes Education Network.
1979 The Dinner Party
Judy Chicago, a feminist artist, created The Dinner Party to honor women in history. The art installation created controversy because each plate depicts a vulva symbol and as one moves forward chronologically the vulvas become larger as a way to depict women’s empowerment. The Dinner Party is now on permanent exhibit at the Elizabeth A. Sackler Center for Feminist Art at the Brooklyn Museum. Judy Chicago’s website. Brooklyn Museum.
1980 Ronald Reagan
The election of Ronald Reagan as President of the United States and Republican control of the U.S. Senate signaled America’s conservative political turn. Reagan opposed abortion rights, gender equality, affirmative action, and many of the policies of the Great Society. Reagan Presidential Foundation. Reagan Presidential Library.
1982 Gay Games
The Gay Games (first called the Gay Olympics) was founded in San Francisco by Tom Waddell in 1982 as a “celebration of freedom.” Over 1,000 athletes from 10 countries competed in 17 sports at the first Gay Games. Over 10,000 people competed in the 1994 Games, making it a larger event than the Olympics. The Gay Games are held every four years at various international locations. Federation of Gay Games. Federation of Gay Games archive (PDF).
1982 Ohoyo One Thousand
Ohoyo One Thousand: A Resource Guide of American Indian/Alaska Native Women was compiled by the Ohoyo (“woman” in Choctaw) Resource Center, which was founded by Choctaw women in 1979. The publication profiles more than 1,000 women from 321 tribes who have achieved success in their respective fields. The Ohoyo Resource Center also published Ohoyo Makachi: Words of Today’s American Indian Women. Ohoyo One Thousand. Ohoyo Makachi. Owanah Anderson biography.
1983 Asian Immigrant Women Advocates
Asian Immigrant Women Advocates was founded as a grassroots organization to provide educational assistance to Asian immigrant women and to engage in social justice campaigns. The Garment Workers’ Justice Campaign from 1992 to 1998 raised public awareness about corporate responsibility to workers. AIWA.
1983 Lesbian & Gay Community
The Lesbian and Gay Community Services Center opened in New York City to provide an affordable space for lesbian and gay groups to hold meetings. Among the founding groups of the Center were the Gay and Lesbian Switchboard, which was established in 1972, and the Coalition for Lesbian and Gay Rights, which led the successful fight for the passage of New York City’s 1986 Gay Rights Bill. In 2001 the name was changed to the Lesbian, Gay, Bisexual & Transgender Community Center. History of The Center. Lesbian and Gay Community Services Center records.
1984 Akwesasne Mother’s Milk
The Akwesasne Mother’s Milk Project was founded by the midwife Katsi Cook, a member of the Mohawk tribe. In response to environmental concerns surrounding the development of the St. Lawrence Seaway Project, the project began monitoring PCB levels in breast milk. The project worked on a number of environmental justice issues connected to women and children’s health. Katsi Cook papers, Sophia Smith Collection. Katsi Cook biography. “Into Our Hands” by Katsi Cook.
1984 Grove City v. Bell
The Supreme Court case of Grove City v. Bell stated that Title IX applied to specific programs receiving federal funding, such as athletic scholarships, not all athletic activities in an educational setting. Grove City was reversed by the Civil Rights Restoration Act of 1987, which stated that schools receiving federal funds must comply with civil rights laws in all areas. Clearly the battle for equality in sports was not over. Case documents.
1984 Pluralism and Abortion
“A Catholic Statement on Pluralism and Abortion,” a full-page advertisement in The New York Times paid for by Catholics for a Free Choice, was published in support of vice-presidential candidate Geraldine Ferraro, who had been attacked for stating that Catholics did not have a monolithic position on abortion. The ad stated that only 11% of U.S. Catholics disapproved of abortion in all circumstances and that a large number of Catholic theologians believed that “abortion, though tragic, can sometimes be a moral choice.” Full text (PDF).
1985 Connexxus & Centro des Mujeres
Connexxus, a women’s center, was founded by Adel Martinez and Lauren Jardine to provide health and social services to lesbians in the Los Angeles area. The first center was in West Hollywood. A second center, Connexxus East/Centro des Mujeres, opened in East Los Angeles in 1986. Connexxus closed in 1990. The group was notable for having an operating budget of over $200,000. Connexxus/Centro des Mujeres archive. Summary, Mazer Lesbian Archives.
1985 Minority Task Force on AIDS
Setsuko (Suki) Terada Ports founded the Minority Task Force on AIDS under the auspices of the Council of Churches in New York City as an advocacy service that emphasized the needs of black and Hispanic people. Ports also started the Family Health Project in 1988 to expand her work with low-income families affected by AIDS. The Minority Task Force is now known as FACES NY. Setsuko Ports interview (PDF). FACES NY.
1986 Bowers v. Hardwick
In Bowers v. Hardwick, the Supreme Court ruled that gay adults do not have the constitutional right to engage in consensual sodomy and that the Georgia Sodomy Law was legal. In 2003, the Court ruled in Lawrence v. Texas that Texas’s sodomy law was unconstitutional and that all adults have the right to engage in private sexual activity. Case documents. Summary, PBS. Harry A. Blackmun papers, Library of Congress.
1987 ACT UP
ACT UP, the AIDS Coalition to Unleash Power, was initiated by the playwright Larry Kramer. The group held its first action on Wall Street in New York City to protest how pharmaceutical companies were profiting from the AIDS epidemic. ACT UP New York. Maxine Wolfe interview.
1987 NAMES Quilt Project
San Francisco gay rights activists, wanting to memorialize those who lost their lives to AIDS, created the Memorial Quilt and the NAMES Project Foundation. The quilt, with 1,920 panels, was first displayed at the 1987 National March on Washington for Lesbian and Gay Rights. AIDS Memorial Quilt.
1987 National Women's History Month
Congress designated the month of March to celebrate women’s historical accomplishments following celebrations of International Women’s Day (first celebrated in 1911 on March 8) and Women’s History Week (established in 1980). The National Women’s History Project played a key role in developing Women’s History Month. National Women’s History Project.
1988 National Coming Out Day
October 11, National Coming Out Day, was established by Jean O’Leary and Robert Eichberg after the 1987 March on Washington for Lesbian and Gay Rights. The annual event is considered a celebration of individual lives and community acceptance of gays and lesbians. Summary, Human Rights Campaign. Exhibition, Cornell University.
1989 Asian Communities
Asian Communities for Reproductive Justice was founded to protect women’s reproductive rights, health, and justice by using a social justice framework, which explores the intersection of racism, sexism, xenophobia, heterosexism, and class oppression. The group was formerly named Asians and Pacific Islanders for Reproductive Health and is now known as Forward Together. “A New Vision” brochure (PDF).
1989 Webster v. Reproductive Health
Webster v. Reproductive Health Services was a serious challenge to Roe v. Wade, which made abortion a woman’s fundamental right. In a 5-4 decision, the Supreme Court ruled that the Missouri law’s provisions, which included a prohibition on public funding for abortions, were constitutional. Pro-choice advocates saw the decision as an encouragement to states to pass restrictive antiabortion laws. Case documents. Video, oral arguments. “Can Pro-Choicers Prevail?” by Margaret Carlson.
1989 “We Remember”
The brochure “We Remember: African American Women Are for Reproductive Freedom” declared women’s right to abortion and asserted that women of color have the right to choose whether to have children. It was written in response to the Supreme Court’s 1989 Webster v. Reproductive Health Services decision. The statement begins: “Choice is the essence of freedom.” In 1990, the authors created the group African-American Women for Reproductive Freedom to continue their work. Full text. “African-American Women and Abortion” by Loretta Ross.
1990 Domestic Violence Coalition
The Domestic Violence Coalition on Public Policy was founded to advocate for federal legislation and evolved into a national alliance of advocates and activists. One of its major efforts was the passage of the Violence Against Women Act (1994). It is now known as the National Network to End Domestic Violence.
1990 Empowerment Through Dialogue
The Empowerment Through Dialogue conference was organized by the Native American Women’s Health Education Resource Center in South Dakota. The event was attended by Native American women from eleven tribes. They created a Native Women’s Reproductive Rights Coalition. By 1994, the Reproductive Rights Coalition included 150 women from 26 tribes. Historical Note, Sophia Smith Collection. Reproductive Justice Agenda.
1990 NIH Research on Women’s Health
Since 1990, the National Institutes of Health Office of Research on Women’s Health has promoted research on women’s health, worked to ensure that women are represented in health research studies, and supported women’s advancement in biomedical careers. It is the first public health service office to focus specifically on promoting women’s health research. National Institutes of Health Office on Women’s Health.
1990 The Beauty Myth
Naomi Wolf’s The Beauty Myth argues that while women have made legal and economic gains, they have been psychologically and politically undermined by societal expectations about women’s physical beauty. A debate ensued about the book’s argument and Wolf’s use of facts. Naomi Wolf profile, The Guardian.
1990 The Women’s Collective
Pat Nalls started a hotline for women in the Washington D.C. area after her husband and daughter died of AIDS and she was diagnosed as HIV positive. The hotline evolved into a support group that became, in 1995, the Women’s Collective, to support women and families dealing with AIDS and HIV. The Women’s Collective.
1991 Backlash
Susan Faludi’s Backlash: The Undeclared War Against American Women, which won the National Book Critics Circle Award for Nonfiction, examines the 1980s media backlash against feminism, which included unsubstantiated stories such as the “man shortage.” Reviewers compared it to Betty Friedan’s The Feminine Mystique. Susan Faludi's website.
1991 Jeanne Clery Act
In 1986, 19-year-old Jeanne Clery was raped and murdered in her Lehigh University dorm room. Her parents formed the Clery Center for Security on Campus and lobbied Congress for what became the Jeanne Clery Act, or Crime Awareness and Campus Security Act. It requires colleges and universities to make public their security policies and crime reports, and to provide timely warnings about campus threats. Clery Center. Text of Jeanne Clery Act. U.S. Department of Education campus security resources.
1991 Pink Ribbon
The Pink Ribbon became associated with breast cancer awareness after pink ribbons were distributed by the Susan G. Komen Foundation in New York City. In 1992, the organizers of National Breast Cancer Awareness Month adopted the pink ribbon as its official symbol. Pink ribbon campaigns have been praised for raising awareness and criticized for the commercialization and corporatization of breast cancer. Susan G. Komen Foundation. Pink Ribbon International. Think Before You Pink. Photo: Creative Commons license.
1991 riot grrrl
The zine riot grrrl popularized the name riot grrrl, a movement within third-wave feminism that uses music, art and literature to explore a range of feminist issues. An earlier zine, Jigsaw, founded in 1988, helped build the riot grrrl community, which includes the American band Bikini Kill, the English band Gossip, and the Russian activists Pussy Riot. Riot Grrrl Collection, New York University.
1991 Tailhook Association
At the annual convention of the Tailhook Association, a fraternal organization for those who work with aircraft carriers, U.S. Navy and Marine Corps officers sexually assaulted 83 women and 7 men. Assistant Secretary of the Navy Barbara S. Pope played a central role in making sure there was a proper investigation. The incident raised public awareness about sexual assault, sexual harassment, and inequalities in the U.S. military. Tailhook '91, PBS Frontline. “Tailhook '91 and the U.S. Navy” by Joslyn Ogden.
1992 Equality Now
Equality Now was founded to promote the human rights of women and girls around the world through coalition building and by documenting violence and discrimination. In 1993, the Women’s Action Network had about 1,000 groups and individuals in 23 countries campaigning on behalf of women. Equality Now.
1992 March for Women’s Lives
The National Organization for Women and other feminist organizations led the March for Women’s Lives in Washington D.C., totaling about 750,000 marchers. Like the 1989 March for Women’s Lives, this one was prompted by a Supreme Court case (Planned Parenthood v. Casey) that would restrict women’s abortion rights. Video, “1992 March for Women's Lives.” Planned Parenthood v. Casey.
1992 Planned Parenthood v. Casey
In 1982, Pennsylvania passed a restrictive abortion law with five regulations. The Supreme Court ruling of Planned Parenthood v. Casey upheld requirements for parental consent, informed consent, and a 24-hour waiting period. The court ruled against the spousal notice rule. Case documents.
1992 Population and the Environment
The Committee on Women, Population and the Environment is a multi-racial organization that examines the relationship between the environment, women’s fertility choices, and population control policies that target specific communities. More recently the organization also focused on immigration rights and what it calls the “greening of hate.” CWPE.
1993 California Breast Cancer Act
The California Breast Cancer Act was passed after grassroots activists, most of them women who had or had survived breast cancer, demanded more resources be spent on research. The Act allocated 45 percent of an increased tobacco tax to breast cancer research and established the California Breast Cancer Research Program. California Breast Cancer Research Program.
1993 Family and Medical Leave Act
Unlike earlier acts, including the Pregnancy Discrimination Act of 1978, this federal law requires that employers provide all eligible employees with unpaid leaves for family or health reasons. The Act has been seen as expanding the legal definition of the family and a major step in balancing the demands of family and work for women and men. Family and Medical Leave Act.
1993 UN Declaration on Violence
In the 1990s, violence against women emerged as one of the global challenges facing communities around the world. The United Nations’ declaration makes a connection between women’s rights, world peace, and the elimination of violence against women. United Nations Declaration.
1994 National Latina Institute
The National Latina Institute for Reproductive Health was founded in 1994 around the core values of “salud, dignidad, y justicia” (health, dignity, and justice). It is the only national reproductive justice organization for Latinas and their families. The group focuses on abortion access and affordability, sexual and reproductive health equity, and immigrant women’s health and rights. National Latina Institute for Reproductive Health. Video, 15th anniversary celebration.
1996 First Nations Women’s Alliance
The First Nations Women’s Alliance evolved out of the Native American Domestic Violence Forum, a committee of the North Dakota Council on Abused Women’s Services. The Alliance worked to support and advocate on behalf of Native women who have experienced violence in their lives. Alliance of Tribal Coalitions to End Violence.
1996 The Vagina Monologues
The Vagina Monologues is a play by Eve Ensler that includes monologues about sexual experiences, sexual violence, body image, women’s experiences of menstruation and other topics. Ensler sought to “celebrate the vagina” as a symbol of women’s empowerment. The play is often performed to raise funds for local women’s anti-violence organizations. Eve Ensler’s website. V-Day. Interview, The Guardian. TED Talk. Photo: The Vagina Monologues at Tufts University, Creative Commons license.
1996 WNBA
The Women’s National Basketball Association (WNBA) was founded by the National Basketball Association in 1996 and began its first season in 1997. The WNBA followed in the footsteps of the Women’s Professional Basketball League (WBL), which played from 1978 to 1981, and the Women’s American Basketball Association (WABA), which played in 1984, 1992 to 1995, and 2002. History of WNBA. Photo: Karima Christmas, Washington Mystics; Creative Commons license.
1997 Khmer Girls in Action
First named HOPE for Girls, Khmer Girls in Action is a women’s reproductive health and empowerment project founded by young Cambodian women. The group expanded its programs to help Southeast Asian women and men in Southern California by advocating for wellness centers and school health clinics. Khmer Girls in Action.
1997 SisterSong
The SisterSong Women of Color Reproductive Justice Collective began as a network of sixteen organizations representing the interests of African American, Arab American, Asian/Pacific Islander, Latina, and Native American women. It was founded to educate women of color about issues of reproductive justice and advocate for public policies in their interests. SisterSong. SisterSong archives.
1997 Third Wave Foundation
In 1992, in response to anti-feminist events, Rebecca Walker wrote a Ms. Magazine article titled “Becoming the Third Wave.” The third wave of feminism concentrates on ending gender violence, expanding reproductive rights, and challenging media misrepresentations of young women. Five years later, the Third Wave Foundation was created to foster youth-led activism for gender justice. Third Wave Foundation records, Duke University.
1998 Have Justice Will Travel
Lawyer Wynona Ward founded Have Justice Will Travel to help battered women and children in rural communities. The group provides women with free legal services, transportation to court hearings, and transition counseling. Have Justice Will Travel.
1999 SAFER
SAFER, Students Active For Ending Rape, was established by Columbia University students to reform their university’s sexual assault policy. The group has evolved into a national organization fighting sexual violence and rape culture on campuses through policy changes. SAFER.
2000 Incite!
Incite! Women of Color & Trans People of Color Against Violence began as a local group that met at a “Color of Violence” conference at the University of California, Santa Cruz, and has grown into a nationwide network of feminists of color working to end violence against women. Incite! Interview with Nadine Naber and Andrea Smith (PDF).
2002 Venus and Serena Williams
In 2002, Venus and then Serena Williams were ranked #1 in women’s singles tennis, with Venus becoming the first black woman to hold the #1 position in the Open Era. The sisters turned professional when they were fourteen years of age, in 1994 and 1995 respectively. Since then they have often dominated both singles and doubles women’s tennis. Both women have won four Olympic Gold Medals. WTA website. Venus Williams website. Serena Williams website.
2003 Indigenous Women’s Health
The Indigenous Women’s Health Book, Within the Sacred Circle, edited by Charon Asetoyer, is a groundbreaking work on Native American women’s health issues. Like Our Bodies, Ourselves, the book encourages women to take charge of their health care. Educational Materials, Native Shop.
2005 The Case of the Female Orgasm
The Case of the Female Orgasm: Bias in the Science of Evolution by philosopher Elisabeth Lloyd is a criticism of androcentric evolutionary accounts of the human female orgasm, which are built on the assumption that female and male sexuality are alike. Lloyd’s work is part of a larger body of scholarship addressing cultural assumptions in human sexuality studies. Elisabeth Lloyd website. “A Critic Takes on the Logic of Female Orgasm” by Dinitia Smith.
2007 Maze of Injustice
Maze of Injustice: The Failure to Protect Indigenous Women from Sexual Violence in the USA is an Amnesty International Report that shows how domestic and sexual violence against Native American women is an international human rights issue. The report was researched and compiled, in part, by Sarah Deer (Muscogee Creek), who won a MacArthur Genius Award for her efforts for Native American women and their communities. Maze of Injustice (PDF). Institute for Native Justice.
2008 Health and Rights Worldwide
The Ms. Magazine Forum on Reproductive Health and Rights Worldwide helped enlarge the international discussion about women’s reproductive health as a basic human right. The forum was called specifically to advocate for an increase in U.S. funding for international family planning programs. Daniel Pellegrom, president of Pathfinder International, was among the participants. Video of forum. Pathfinder International.
2010 UN Women
Officially known as the United Nations Entity for Gender Equality and the Empowerment of Women, this group focuses on the empowerment of women economically and in leadership positions, ending violence against women, and keeping gender equality at the forefront of development initiatives. UN Women. UN Women Watch.
2012 “Lesbiana”
“Lesbiana: A Parallel Revolution” is a film by Myriam Fougère that documents the history of an international clandestine movement of lesbians who sought to live only with women. “Lesbiana” website.
2013 Black Lives Matter
Black Lives Matter is a decentralized movement that campaigns against violence and racism towards black people. It was founded by Alicia Garza, Patrisse Cullors, and Opal Tometi in the wake of the 2012 killing of Trayvon Martin. Website. Interview. Sydney Peace Prize.
2013 End Rape On Campus
End Rape on Campus (EROC) was founded as a survivor advocacy group, to lobby for policy and legislative reforms, and to increase public awareness about sexual violence on campus. EROC helps those who want to file a federal complaint under the provisions of Title IX and the Clery Act. End Rape On Campus.
2013 Gay & Lesbian Sports Hall of Fame
The National Gay and Lesbian Sports Hall of Fame was established in Chicago in 2013 to recognize individuals and groups who have enhanced athletics for the gay and lesbian community. Its 2013 inaugural class had five women out of 17 inductees and in 2014 only three women were included in a class of fifteen. National Gay & Lesbian Sports Hall of Fame.
2013 Know Your IX
Know Your IX is a national campaign run by students and survivors of sexual violence. It aims to empower students to stop sexual violence and educate them about their rights under Title IX. Know Your IX.
2013 Ultraviolet
Ultraviolet is a grassroots organization that has campaigned to influence policies about birth control access, violence against women, pay inequality and other issues, to show “corporations, media outlets, and public officials that there is a clear cost for being anti-woman.” The group’s slogan is “Equality at a Higher Frequency.” One of its first campaigns influenced Reebok to end its sponsorship of rapper Rick Ross because his lyrics celebrate rape. Ultraviolet.
2016 Hillary Clinton wins Nomination
At the Democratic National Convention, Hillary Clinton became the first woman to receive the presidential nomination from a major political party. “When there are no ceilings,” she stated in her acceptance speech, “the sky’s the limit.” Acceptance Speech.
1963 The Feminine Mystique
Betty Friedan’s The Feminine Mystique is considered a foundational text of the post-World War II women’s movement. Review in NY Times, 1963. Review in The Atlantic, 2013.
1966 National Organization for Women
Seen by its founders as a “NAACP for women,” NOW was established to work independently of government agencies in the effort to increase women’s rights and fight sex discrimination. In 1968, NOW endorsed the passage of the Equal Rights Amendment. NOW. NOW records, Schlesinger Library. Photo, founders of NOW.
1967 Twiggy arrives in New York
When the English model Twiggy arrived in New York City in 1967 she was already an icon with her thinness, bobbed hair, and “Twiggy Dresses.” The “Twiggy phenomenon” lasted long after Twiggy retired from modeling in 1970. Photos, Life Magazine. Twiggy’s website.
1968 Miss America Pageant protest
New York Radical Women organized this protest to bring public attention to sexism, especially society’s ideas about women and beauty. Some of the 400 protestors tossed into a “Freedom Trash Can” items they considered demeaning or oppressive, including women’s magazines, wigs, high heels, and bras. The protest brought media attention to the feminist movement. Video excerpt from “Miss America,” a Clio film.
1968 Myth of the Vaginal Orgasm
Anne Koedt’s classic feminist text The Myth of the Vaginal Orgasm argues that the clitoris, not the vagina, is the center of a woman’s sexual sensitivity and that ignorance about this leads to ideas about women’s frigidity. Koedt drew on the work of Alfred Kinsey and Masters and Johnson. Full text.
New Diversities • Volume 24, No. 2, 2022 — https://newdiversities.mmg.mpg.de/new-diversities-24-2-2022-special-issue_theorizing-sexuality-religion-and-secularity-in-postcolonial-europe/brenda-bartelink-and-kim-knibbe_why-the-dutch-break-taboos/
Why the Dutch (Think They) Break Taboos:
Challenging Contemporary Presentations of the Role of
Religious Actors in Narratives of Sexual Liberation*
by Brenda Bartelink (University of Groningen)1 and Kim Knibbe (University of Groningen)
* This work was supported by the Dutch Organisation for Scientific Research (NWO) under Grant 360-25-160. The authors report no conflicts of interest.
1 Corresponding author: [email protected]. Both authors should be considered as first authors for this article. The authors gratefully acknowledge the comments and suggestions on earlier versions of this paper by Rachel Spronk, Jelle Wiering and Amisah Bakuri, and the very constructive and detailed suggestions of two anonymous reviewers.
In contemporary approaches to sexual health in the Netherlands, religion and culture are often framed as a source of taboos that need to be broken in order to create more openness around sexuality. This view is often projected onto migrants with a religious background and onto other parts of the world that are ‘still’ religious. In this article, we suggest that one element to developing a more inclusive approach is to question existing narratives of ‘sexularism’ and to acknowledge that both religious and secular actors have historically been involved in the search for better ways of approaching sexual health and sexuality in the Netherlands. In contemporary characterizations of Dutch culture, the sexual revolution is referenced as a time in Dutch history when religious small-mindedness around sexuality was dismantled through a series of transgressive media events. Iconic moments in the sexual revolution have become ingrained in a collective memory of the 1960s as liberation from the firm grip of religion on peoples’ intimate lives. In this article we argue that the contemporary Dutch equation of secularization with openness around sexuality obscures a more complex dynamic between conservative and progressive forces within Dutch religious history. Based on existing research, we show that openness around sexuality was taking shape from within Catholic and Protestant communities and being materialized in new discourses, services and practices around sexuality in the 1950s and 1960s. Frictions between Protestants and Catholics, the clergy and the people, and liberal and conservative circles were part and parcel of some of the iconic moments that are now considered to have shaped Dutch culture.
Keywords: sexual health, religion, secularity, taboo, the Netherlands
Introduction
In 2017, the largest Dutch sexual health organization, Rutgers, celebrated fifty years of its existence. In looking back, its director at the time referenced the origins of its predecessor organization (Nisso) in 1967, in the middle of the sexual revolution:
The revolution led to all kinds of small rebellions and demonstrations. The boundaries regarding what was allowed and what was not were pushed further. The Dolle Mina’s [Dutch feminist group] were influential there. But 1967 was also the year of Phil Bloom, the first woman to appear naked on Dutch television. It led to a lot of commotion, people were fired, and even questions in parliament. Also nowadays, everything concerning sexual freedom and rights is accompanied by incidents, struggle and activism.|2Die revolutie kwam tot uiting in relletjes en demonstraties. De grenzen van wat wel en wat niet mag, werden steeds verder opgerekt. Onder meer door de Dolle Mina’s. 1967 is ook het jaar van Phil Bloom, de eerste blote vrouw op de Nederlandse televisie. Dat leidde tot veel commotie, weggestuurde redactieleden en zelfs Kamervragen. Ook nu gaat alles met seksuele vrijheid en rechten nog gepaard met relletjes, strijd of activisme. (Rutgers, 2017).|
Wiering, who attended the event, noted that this image of the ongoing sexual revolution was accompanied by a tacit construction of religion as a brake on these developments. For example, the presenter of the programme, Sophie Hilbrand, announced that the guest of honour, the King of the Netherlands, Willem-Alexander, was coming from another event celebrating five hundred years of Protestantism: ‘so you can imagine how relieved he will be to get here,’ she joked (Wiering, 2020, p. 68).
The framing of religion during this event, as well as the representation of the importance of the sexual revolution as a pivotal moment in the Dutch history of sexual liberation, are staples of the so-called secular frame that is prevalent in Dutch approaches to sexual health and sexual well-being in general. As other authors have outlined, this also informs the ways in which people with a religious and/or migration background are approached. The prevailing construction of the Netherlands is that it is progressive and enlightened in its approach to gender and sexuality, in contrast to those from ‘other’ cultural and religious backgrounds. This is particularly evident in relation to migrants with a Muslim background around issues of gender and homosexuality (Balkenhol et al., 2016; Bracke, 2012, 2011; Bracke and Fadil, 2008; Knibbe and Bartelink, 2019; Mepschen et al., 2010). However, it is also evident in the framing of research on the transmission of sexually transmitted diseases, including HIV, among migrant groups (e.g. Fakoya et al., 2008; Stutterheim et al., 2011; see Krebbekx et al., 2016 for a critique of how ethnicity is framed as problematic in such research).
The need to break taboos also figures prominently in the public policy documents and public activities of Dutch organizations advocating sexual health. This is evident in the work of the largest such organization, Rutgers. This organization published a small booklet on religion and sexuality remarkably entitled ‘Zwijgen is zonde’ (Ohlrichs and van der Vlugt, n.d.). This title can be read in two ways, namely: ‘staying silent is a sin’ or as ‘staying silent is a pity/a waste’. In the booklet, the authors explain that ‘in some cultures and religions it is taboo to speak about sexuality, even prohibited’ (p. 20). In its international work, Rutgers also frequently refers to taboos around sexuality which have a striking similarity with how they present the need to break taboos vis-à-vis minorities in the Netherlands (Leerlooijer et al., 2011; Vanwesenbeeck et al., 2016). Another influential group of organizations referring to the need to break taboos are the local public health offices of the municipalities in the Netherlands (GGD). In their 2011 policy document, the public health office (GGD) of Amsterdam links the aim of supporting the cultural and religious organizations of migrants to tackling stigma and taboo around HIV/AIDS.|3GGD Amsterdam (2011) ‘GGD Visie op Seksuele Gezondheid’.| The same discourse can be observed with Pharos, a Dutch NGO working in the area of cultural differences and health, which published a toolkit for professionals in 2016 to help them talk about taboos (‘taboe-onderwerpen bespreekbaar maken’), such as sexuality and sexual abuse in migrant communities.|4Pharos (feb. 2016) ‘Toolkit. Seksueel misbruik in migrantenfamilies. Voorlichting aan migranten over seksueel misbruik in de familie. Handreiking voor Hulpverleners’. For a direct reference to the word taboo cf. the press release on the Pharos website: http://www.pharos.nl/nl/kenniscentrum/algemeen/nieuws/868/toolkit-voor-hulpverleners-om-taboeonderwerpen-bespreekbaar-te-maken| References to taboos and the need to break them in relation to religion and sexuality are also observed in Dutch international development discourses and practices (B. Bartelink and Wiering, 2020).
As many scholars have noted, this opposition between religious and secular approaches to sexual well-being and sexual health is unhelpful since it seems to create a choice between ‘progressive’ secular approaches to sexual health and well-being and religious identities and practices. Indeed, critical researchers have pointed out the secularist and culturalist biases in approaches to sexual health and sexual health education (Bartelink, 2016; Bartelink and Wiering, 2020; Rasmussen, 2010; Roodsaz, 2018; van den Berg, 2013). This is not particular to the Netherlands: generally, what are known as evidence-based sexual health practices are not neutral but mobilize particular cultural narratives in their encounters with religion (Burchardt, 2015). Public health institutions that base their interventions on scientific evidence often frame and communicate these interventions in the context of a narrative of progress and sexual liberalization (Adams and Pigg, 2005) and dismiss religion as a brake on these developments, something to ‘overcome’. This critique fits in with a broader trend within the interdisciplinary study of religion that calls for a critical examination of the secular and how it is caught up with notions of modernity, portraying those who do not fit into this narrative as ‘not yet modern’, backward, etc. (Balkenhol et al., 2016; Brandt, 2019; Cady and Fessenden, 2013; Knibbe, 2018; Mahmood, 2015; Scott, 2018; Wiering, 2020).
Public health organizations are also increasingly engaging with these critiques. As researchers, we were often involved in debates and discussions with representatives of Rutgers and other sexual health organizations where they were struggling to become more aware of their implicit bias and framing.|5Bartelink, Brenda (2019) Presentation and Panel Discussion at the Johannes Rutgers Dialoog: De strijd om seks (3 October 2019) https://www.rutgers.nl/dialoog; Bartelink, Brenda (2019) Presentation for the International Department of Rutgers, Knowledge Centre on Sexuality, 7 February 2019.| Yet, as we will show, there is still more to do. In order to be able to generate more nuanced views on the role of religion and culture, we propose that it is important to re-examine the Dutch equation between secularization and the sexual revolution as the royal road toward openness around sexuality, and thus as a healthy approach to sexuality. As we will show here, the narratives of the sexual revolution and secularization that the Netherlands went through are in fact more complicated, and religious actors were much more involved in them than is often thought.
In the following, we will trace the post-WWII history of thinking on sexuality in the Netherlands within the networks of organizations that were developed around particular denominational (Protestant and Catholic) identities until the 1960s. Reading through the historical research on how sexuality was discussed within these networks gives us a much more complex picture than the uniform ‘repressive’ moralizing that is often associated with religion in secular discourse in the contemporary Netherlands. After that, we analyse several iconic moments connected to the sexual revolution in the 1960s when taboos were broken. Here too, religious actors could be found on all sides of the controversies that arose around these acts of breaking taboos. Where relevant we will refer to broader developments in the Netherlands and western culture, but our focus is on the role of religious actors and organizations.
In describing these developments we make use of existing historical research. Thus, in a sense the stories we tell here are not new, particularly to historians of religion in the Netherlands. Nor are they exhaustive and complete. Rather, we focus on the ways in which religious actors and networks were changing their discourses and practices around sex before the sexual revolution and how they were involved in some of the iconic moments of this cultural break. This more complex history problematizes and complicates current secular framings of religion as inherently and inevitably conservative in relation to sexuality. We reflect on this in the final section of this article.
Sex and Taboo in Dutch Religious Subcultures
The Netherlands is known for its Christian pluralism and its social organization in terms of different religious denominations and ideological groupings. Although the Catholic community was numerically the largest, the Protestant subcultures were politically and culturally more dominant, while the socialist and the humanist organizations gained ground at the cost of the religious denominations. The existence of powerful religious subcultures in the Netherlands suggests a particular kind of secularity that, in the typology developed by Wohlrab-Sahr and co-authors, can be characterized as secularity for the sake of accommodating religious diversity (Schuh et al., 2012).
How sex was discussed, or rather turned into an area of opaqueness within the Catholic and Protestant subcultures in the early twentieth century, should be seen against the background of a broader project of modernization which came with fundamental changes in views and practices around raising children, as well as with the rise of the modern nation state (Hekma and Stolk, 1990).|6The chapter by Schnabel is particularly relevant in this regard.|
According to these authors, the primary goal of Dutch elites became to safeguard the sexual innocence of children. This led to a lack of sexual socialization even for those who had reached adulthood. At the same time, as a modern nation state the Dutch government became increasingly interested in controlling sexuality and the sexual health of its population. As this control was institutionalized within and via religious communities, sexual opaqueness came to be undergirded by restrictive Catholic and Protestant moralities.
In the late nineteenth century, some of the medical elite had acknowledged the lack of sexual education in children as problematic and started to research and publish on sexual health. Motivated by projections of exploding population growth developed by the British clergyman and scholar Thomas Robert Malthus, the New Malthusian Union (NMB), established in 1881, became an influential organization advocating birth control. From 1901 onwards, the work of the NMB gained more popularity under the leadership of the medical doctor (and former Protestant minister) Johannes Rutgers. Rutgers advocated that contraceptives should become accessible to the general population. Despite this plea (or, as we shall see, sometimes because of it), sexual opaqueness and control was further institutionalized within the Catholic and Protestant subcultures, sometimes as a direct response to the activities of the NMB and other organizations. This has led to a complex dynamic of liberation and control in the history of sexuality within Dutch religious subcultures and consociational networks in the twentieth century. The following two sections will trace these developments for Catholicism and Protestantism respectively. These overviews will necessarily be brief and lacking in precise detail, for which we refer the readers to the works cited.
Catholic Consociational Networks
Within Catholic consociational networks, a strong emphasis on regulating sexuality – always a concern within Catholic theology – became increasingly systematized in theology and in the training of priests during the first half of the twentieth century. Catholic moral theology developed a view of sex as only meant for procreation and opposed the use of contraceptives, following the encyclical Casti Connubii published in 1930. Although a Catholic doctor, Smulders, developed a method of ‘natural’ birth control (the rhythm method) in the 1930s, this method was controversial and generated opposition from Dutch organizations of family physicians and moral theologians. It was decided that his method should not be publicized, let alone published, but only offered as a possibility for women to use after a priest had examined the circumstances of ‘marital life’ in the confessional (Hofstee, 2012, pp. 20-21).
In universities and seminaries, the teachings on preventing any kind of sexual activity that was not geared towards procreation were developed in ever more detail. This emphasis was strongly bound up with the so-called ‘frontier mentality’ of Dutch Catholics: after centuries of discrimination, Dutch Catholics only started to emancipate again in 1853, when the Catholic hierarchy was re-established on Dutch soil. There was a big push after that to gain equal rights and status in the Netherlands. Indeed, for a while Dutch Catholics had a distinct demographic advantage: due to their strong stance on prohibiting birth control and encouraging large families, they grew at a rate that was noticeably faster than other groups in the Netherlands, aiming to become at least half of the population of the Netherlands, up from around 30% (Knippenberg, 1992; Schoonheim, 2005; Van Heek, 1956).
Westhoff, who wrote several important and detailed studies of the changing discourses around sexuality and mental health in Dutch Catholicism, describes the period after WWII as one where the emphasis on regulating sex through moral prescriptions and detailed behavioural guidelines first increased and was then challenged from within Catholic networks of organizations (Westhoff, 1996).|7Much of the following in this section is based on Westhoff’s superb study of the Catholic mental health movement, where sexuality was a primary concern.| The tightening of control occurred partly in reaction to the heady days after the liberation from German occupation. At this time, sexual engagement and pleasure became part of Dutch popular culture through the mixing of soldiers from the allied forces with local girls and through the introduction of popular culture and music from the US and the UK. This led to concerned and disapproving reactions among Catholic clergy and professionals. The regulation of sexuality became an explicit concern in schools, in organizations dedicated to leisure activities and in the training of priests. The focus was on promoting sex (understood as vaginal penetration) within heterosexual marriage, positing a view of procreation as a natural and sacred duty. Any other kind of sex, eroticism or displays of sexuality were prohibited.
This tightening of control created a counter-reaction. Towards the end of the 1950s, mental health professionals within Catholic consociational networks started to express their doubts about the soundness of the Catholic approach to sexuality. Because of their catalysing role in changing the way Catholics thought about sexuality, authority and punishment, the (mostly) men of this movement have been described as ‘spiritual liberators’|8In Dutch: geestelijke bevrijders.| by Westhoff (see the title of her monograph, Westhoff 1996). Psychiatrists treating student priests signalled that many of them developed ‘neuroses’ and traced this to the rules regarding celibacy and masturbation. Furthermore, it was thought that the strong emphasis on preventing sex outside of marriage led to sexual problems after marriage.
According to some prominent figures in the Catholic mental health movement (notably the physician and psychologist Buytendijk), the small-mindedness of Catholic regulations concerning morality was the source of many psychological problems, of higher delinquency among Catholics due to an insufficiently developed personal conscience, and of a high rate of sexual delinquency. Catholic moral education and the social and spiritual machinery of parish life focused on punishing those who sinned by excluding them from the sacraments, and thus from God’s grace. According to Buytendijk and other spiritual liberators, Catholic moral teaching should focus on inspiring believers to live a morally good life, whereas now the emphasis on rules in fact prevented believers from developing fully as independent adults. Catholic morality as it was enforced at that time, according to the spiritual liberators, mostly inspired a fear of accidentally sinning rather than faith in God’s goodness.
At first, the discussions on Catholic morality concerning sexuality took place behind closed doors. Gradually, however, the censorship of the church loosened, and sexuality and birth control became openly discussed topics in the Catholic media: first some Catholic magazines, later the radio, and finally television. Sexual relations became a subject for mental health, to be addressed by professionals trained in psychology. Within a few years, a paradigm shift occurred within the Catholic community, from combating sins to combating the psychological problems that were thought to cause the sinning.
Importantly, ‘official’ Catholic morality, and more specifically Casti Connubii, the encyclical that had been published in 1930 in response to the promotion of birth control by Neo-Malthusians, remained the primary moral source for the spiritual liberators. For example, they often referred to a ‘sensus Catholicus’, a supposedly typically Catholic receptivity to direction by the Holy Spirit and the Church as the Body of Christ. It was on this ‘sensus Catholicus’ that they relied to make their efforts to improve the mental health of Catholics not just neutral and professional, but a truly Catholic endeavour that would promote the liberating message of Jesus Christ (Westhoff 1996: 314-315).
Nevertheless, Casti Connubii was reinterpreted in such a way that the role of sexual relations in the ‘primary’ (procreation) and ‘secondary’ (a loving relationship) aims of marriage came to be seen quite differently, leading to the conclusion that limiting the number of children, and thus using birth control, could be warranted to safeguard the secondary aim of marriage. The aim of a loving relationship was also emphasized and elaborated theologically in the ‘Nieuwe Katechismus’ (New Catechism) in the light of God’s love (van den Bos, 2021). Although this Catechism was later redacted and a newer version has been in use officially since 1992, it is often still cherished by Catholics who came of age at the time of its publication, as it shaped their views on how religion and sex could be understood as imbued with love and pleasure.
In time, lay Catholic professionals implemented these ideas in the many institutions and organizations of the Catholic community: from kindergarten to university, from the first experiments with co-education to the re-organization of the training of priests in open institutes mixed with the training for lay pastors. Furthermore, due to radio and television, ‘the public’ at large was also drawn into the discussion. And since it concerned issues very close to their heart, this public listened avidly: the radio shows of Dr Trimbos were especially important here. A key moment, moreover, was Bishop Bekkers’ pronouncement during a television broadcast that contraception should be a matter for personal conscience (van Schaik, 1997, p. 347). This was revolutionary, since it removed an entire area of life out of the control of the Catholic Church.
The Protestant Subcultures
Whereas Dutch Catholics were quite concerned with the need to operate as one block in the Dutch religious and political landscape, Dutch Protestantism is notoriously prone to splitting. Thus, the following should be read as quite a broad sketch of some of the developments within the main strands of Protestantism in the Netherlands: the ‘Hervormden’ and the ‘Gereformeerden’, which internally were also quite disparate in their views and practices.
Within these communities, in the first half of the twentieth century an understanding of sexuality emerged that focused on its connection to love rather than procreation. This development took place against the background of broader shifts in the Protestant churches, whereby the social role of the church was emphasized over its role to safeguard certain moral standards and dogmas.|9The Hervormde Raad voor Kerk en Gezin (Reformed Council for Church and Family) was established to support families affected by WWII and the colonial wars in the Dutch East Indies.| The Lambeth Conference in 1930, at which the Anglican Church in England had accepted contraceptive use, also influenced Protestant approaches to sexuality in the Netherlands. Arguments that contraceptives were problematic because they were artificial were soon rejected within more liberal circles. Family planning was therefore accepted as a practice even before artificial contraceptive use became widely available. Over the years, the idea that the planning and spacing of children was part of being a responsible parent and spouse became firmly rooted within many Protestant communities.
A new Protestant view on sexuality was introduced by the Hervormde medical doctor Felix Dupuis, a sexual liberator in the Protestant community. Dupuis had become aware of the sexual health needs of young women in particular through his experience with the death of a young woman after a self-induced abortion during WWII.|10Dupuis and his colleagues found out later that the girl in question had not been pregnant in the first place (Hageman 2007).| Dupuis published a widely sold book on sexual health in 1947, introducing a positive view on sexuality with reference to the Bible book Song of Songs. Following this, the Dutch Reformed synod published a pastoral letter on marriage (Herderlijk Schrijven over het Huwelijk) in 1952 that argued against the common association of the sexual with sin and affirmed sexuality as a gift from God (Bos, 2010).
Within the Gereformeerde community, there was a stronger emphasis on sin in relation to sexuality, yet the need to educate people about sexuality was also noted. Influenced by articles written by the Gereformeerde theologian Waterink and medical doctor Drogendijk, who collaborated with Dupuis, issues regarding sexual health and well-being came to be addressed directly rather than silenced or ignored (Drogendijk, 1952).|11Cf. A.C. Drogendijk, Man en Vrouw voor en in het huwelijk; Een boek over het seksuele leven voor verloofden en gehuwden, first published in 1941. The 1964 version is significantly different in giving more attention to love within marriage in view of ‘de vorming van een gelukkig huwelijksleven’ (‘the formation of a happy married life’).| For married couples, the most visible change occurred when the Synod of the ‘Gereformeerde Kerken’ accepted contraceptives in 1963 (Vellenga, 1995). The Hervormden were much earlier than the Gereformeerden in their formal ecclesiastic response to the sexual and reproductive questions that were emerging in the Protestant communities (Bos, 2009). In addition, the notion of sin and the illegitimacy of sexuality continued longer for the Gereformeerden because of their understanding of marriage as a metaphor for humankind’s unity with God and with the nation (Drogendijk et al., 1961).
The Protestant liberators, Dupuis, Waterink and Drogendijk, saw sexuality as part of human flourishing, expressing its relationship to the transcendent (God) and to the immanent (society) (Drogendijk et al., 1961).|12Cf. the contributions of the Gereformeerde members of the working group on sexuality education of the Nederlands Gespreks Centrum, a foundation to improve communication between diverse groups and communities in Dutch society. P. 7 discusses sexual development and growth as essential to human development within the context of marriage.| Eroticism was seen as an essential element in a healthy marriage. Like the views of the Catholic spiritual liberators, this positive view of sexuality was still confined within a traditional morality. Marriage was seen as a spiritual and legal frame in which sexuality was practised. Children were an expression of the spiritual and physical unity within marriage, but procreation was not seen as the main purpose of marriage. This more liberal understanding of sexuality became institutionalized when, five years after the publication of the pastoral letter, the Protestantse Stichting voor Verantwoorde Gezinsvorming (PSVG) was established under the leadership of Dupuis. Together with the Hervormde Raad voor Kerk en Gezin (Dutch Reformed Council for Church and Family) and the Nederlandse Vereniging voor Seksuele Hervorming (NVSH, Dutch Society for Sexual Reform), the PSVG played a crucial role in changing medical law in 1966, which decriminalized the public selling and advertising of contraceptives. When the PSVG started Protestant counselling centres that offered medical, pastoral and social support to Protestant families, it became an accepted institution for family-planning services able to reach the Protestant constituencies that did not easily access the services offered by the secular organization, the NVSH.
Oscillating Between Control and Liberation
In summary, within both Catholic and Protestant subcultures, tendencies toward liberation and tendencies toward exercising stronger control over sexuality can be observed in the post-war period. Among Protestants, the family-planning movement emerged independently of the mental health movement that was such an inspiration for the Catholic liberators and was less controversial. Yet, even before the PSVG became part of the National Protestant Centre for Mental Health in 1966, there were important similarities in how the understanding of sexuality changed within Catholic and Protestant circles. The similarity is particularly evident in how promoting knowledge on sexual health emerged out of the growing importance of pastoral approaches and the professionalization of spiritual care. The then dominant moral, dogmatic understanding of sexuality was questioned, while awareness of the body, health and emotions increased. In Catholic circles this was explicitly referred to as the breaking of taboos. This shift also enabled a more material and technical approach to sexuality.
One possible difference between the Protestant and the Catholic subcultures is the extent to which liberators were developing their thinking in dialogue with or in opposition to their respective churches. The representatives of the Catholic hierarchy in the Netherlands initially refrained from making any public statements on the use of family-planning methods. Archbishop Alfrink was waiting for the conclusions of the Second Vatican Council, which was ongoing at the time (van Schaik, 1997). However, in 1963 the popular Bishop Bekkers pre-empted these internal deliberations by declaring on public television that couples themselves should decide how many children to have.
This statement had a profound influence on Catholics in the Netherlands, who welcomed the emphasis on individual conscience, rather than the strong role of the church in regulating people’s lives (Knibbe, 2013, Ch. 3). It also established the role of Dutch Catholicism as leading in progressive reforms. Worldwide, developments in Dutch Catholicism were seen as noteworthy. The pastoral Council of Noordwijkerhout (1966-1970) was seen as a particularly notable process, challenging priestly celibacy and suggesting that women should be allowed to enter the priesthood, among many more radical changes to reform Catholicism from a hierarchical institution to a broad movement finding its way to God (‘Gods volk onderweg’, ‘God’s people on the way’) (Coleman, 1978).
These developments in Dutch Catholicism were cut short by the publication of the papal encyclical Humanae Vitae (1968) during the pastoral council in Noordwijkerhout, which re-established procreation as the primary aim of marriage and explicitly forbade the use of artificial contraceptives. In retrospect the publication of this encyclical signalled the start of a Rome-led conservative turn within the Dutch church hierarchy during the 1970s, when two outspokenly conservative bishops were appointed. This conservative turn within the Dutch Catholic hierarchy produced a lasting polarization among Dutch Catholics more broadly. In reaction, many Catholics who had embraced Bishop Bekkers’ message of autonomy no longer accepted the authority of the Catholic Church in the area of sexuality, family and relationships, even mocking priests who did try to re-establish these norms as attempting to ‘turn back the clock’ (Knibbe, 2013, Chs. 3, 4).
Because family planning and contraceptives were less problematic in Protestant circles, there was more space for conversation on matters of sexual well-being within the Protestant community and within the different Protestant churches. This difference between the Catholic and Protestant subcultures also fed the self-fashioning of Protestants as responsible parents vis-à-vis allegedly conservative and backward Catholics. Protestants frowned upon Catholic moralities around family planning, criticizing the poverty trap these created for working-class families (Bos, 2009; Mulder, 2013).|13The earlier mentioned Hervormde pastoral letter, for example, firmly rejects an understanding of sexuality as exclusively focused on reproduction, which at the time was a clear reference to Catholic morality. Mulder (2013) refers to criticisms by a minister, Jan van Boven, also noted in a personal reflection by van Boven published on the website of the Condomerie in Amsterdam under the title: Church and Condom https://condomerie.com/condomologie/condoomhistorie/kerk-en-condoom|
The increasing openness towards sexuality and family planning within the Catholic and Protestant communities gave rise to contestations over sexuality within and between various Catholic and Protestant subcultures. Yet, as Kennedy has noted, there was a strong sense that the whole of the Netherlands was moving towards a radical break with hierarchical cultural values (Kennedy, 1995). In this cultural revolution, secular and religious actors both played a role on both the progressive and the conservative sides of the equation. These dynamics informed some of the most iconic events of the sexual revolution, which we will discuss in the next section. Around these events, a polarization emerged between liberal and more conservative religious moral approaches to sexuality in the 1960s and 1970s.
Breaking Taboos in the 1960s
Two issues in particular have become something of a ‘litmus test’ for migrants to become ‘culturally Dutch’: public nakedness, in particular naked breasts, and homosexuality. Not coincidentally, both topics figure prominently in the integration video for migrants who wish to immigrate to the Netherlands from so-called ‘non-western’ countries. This is the film called ‘Naar Nederland’, part of the lesson materials for the basic integration exam for migrants (Balkenhol et al., 2016; Bracke, 2012; Butler, 2008). It exists in both redacted (‘gekuist’, literally ‘chaste’) and unredacted (unchaste) versions because of the nudity and imagery depicting homosexuality.|14https://www.naarnederland.nl/lesmateriaal accessed 19-07-2022| In the following, we focus on two key events of the sexual revolution that are often seen as the origin point for the cultural changes now presented as the dominant cultural norm in the Netherlands and show how religious actors were in fact involved on all sides of the controversies generated. Both were broadly publicized and stretched out over a period of time.
The first concerns the controversy around the first naked woman to appear on TV in 1968 (Kennedy and Kennedy-Doornbos, 2017). In the first broadcast of the TV programme Hoepla, arts student and model Phil Bloom walked behind a musical performance wearing a very short flowery garment that covered her breasts and genitals.|15The programme was made by four artists, all related to an international pop-art movement called fluxus.| The second episode made the international news because she was now fully nude. Not insignificantly, Hoepla was broadcast by the liberal Protestant broadcasting service VPRO. In addition, Phil Bloom was initially supposed to hold the Protestant Christian newspaper Trouw in front of her.
When this plan became publicly known, it generated so much controversy that the scene was changed. When the second episode was broadcast, Bloom appeared instead reading an article in the Social Democrat newspaper Het Vrije Volk, which covered the controversy following her performance in the first episode. After reading, she lowered the newspaper, fully exposing her breasts to the audience. This broadcast was not only covered in the international media, it also led to questions in Parliament, notably from the conservative Protestant Christian party SGP. Responding to the controversy, the board of the VPRO, chaired by liberal Protestant minister Ad Mulder, decided to terminate the programme before the series was finished.
Phil Bloom became an icon of the sexual revolution at the time, the first naked woman on TV in the Netherlands. In current cultural representations, it is still an important symbol of a break with opaqueness around the topic of sex and the body, as we saw in the reference the director of the largest Dutch sexual health organization made to it in celebrating fifty years of the organization’s existence. The initial plan to have her hold the Trouw in front of her shows how the juxtaposition of the naked female body with Protestantism was used to generate ‘shock value’. What is particularly interesting is that the VPRO, a broadcasting service rooted in liberal Protestantism, intentionally framed a certain kind of Christianity as a hindrance to the liberalization of sexual morality throughout the publicity that surrounded it.
Another iconic event where progressive and conservative religious trends became visible was the blasphemy trial against writer Gerard van ‘t Reve (later Gerard Reve) (Bos in Andeweg, 2015). As an openly gay Catholic convert, he was the living embodiment of the progressiveness in sexual matters that was then being pioneered among some Dutch Catholics, especially in Amsterdam and around the pastoral council of Noordwijkerhout.
Although Reve did not at first emerge as a Catholic intellectual, since he converted at a later stage, he was an intellectual, was openly gay, and after his conversion wrote explicitly as a Catholic. As an author, he was also the embodiment of the desire to break with the stolid 1950s. His first novel, De avonden, described the suffocating atmosphere of this period, summarized in the evocative word spruitjeslucht (the smell of Brussels sprouts), which is still often used in relation to the suffocation that many Dutch associate with religion and the mentality of the 1950s.
In his writing, Reve creatively combined characteristics in a way that often shocked people, and that certainly grabbed attention and provoked debate. The trial on blasphemy followed a publication by Reve in which he imagines himself having anal sex with God represented as a donkey. At first, this text did not generate a lot of attention. It was only after Reve was criticized by a Catholic priest and a Protestant minister that the content of this publication became known among a broader audience. Paradoxically, the pastors who criticized Reve were progressives who had organized the first pastoral counselling groups for homosexuals in the Netherlands. They feared Reve’s provocative writings would cause a conservative backlash. This fear, expressed in a public letter, was borne out when I.R. van Dis, an MP for the conservative Protestant party the SGP, used the letter to urge the Minister of Justice to request an investigation.|16A central figure in this was the medical doctor and historian G.A. Lindeboom, professor at the Gereformeerde VU University. Rather than criticizing Van het Reve, Lindeboom criticized liberal theologians who had defended the writer. Cf. Bos (2015) and (Lindeboom, 1967).|
The blasphemy trial that followed was central in controversies on religion and homosexuality in the 1960s, and as such it has played a crucial role in the construction of liberal sexuality as a symbol of secularism in the Netherlands. Reve was found not guilty, mainly because it could not be proved that he intended to blaspheme. His main argument was that he was himself a Catholic and that he expressed his beliefs about God as he believed them; he was therefore exercising his right to freedom of religion, and the case was merely a way of making one understanding of God more important than another (e.g. Jansen 2017). Following the Supreme Court’s decision, blasphemy became practically inapplicable as an offense, a development Christian political parties repeatedly deplored as a corrosion of the blasphemy law (ibid.). In the literary writings Reve produced during the period of the trial, which were published afterwards, critics have observed how Reve created an image of a highly personal God that reflected his own homosexual lifestyle (Batteau 2022).
Having liberated themselves from restrictive Catholic sexual morality, some Catholic intellectuals embraced Reve because his work allowed for a representation of Catholic culture in which morality might seem restrictive but in practice allowed for a lot of freedom (Andeweg, 2015). Some Catholics argued that his text was in the tradition of Catholic mysticism, of becoming one with God.
In Dutch public memory, this trial is the moment when two taboos were broken simultaneously: the taboo on homosexuality and the taboo on blasphemy. After this trial, the law on blasphemy was in effect a dead letter. In particular in discussions around the Danish cartoon crisis, this cultural moment is often referenced as exemplifying the triumph of the right to free speech over religious sensitivities.
Discussion
As we outlined in the introduction, in the Netherlands, as in many other western nation states, there is a particular understanding of secularism as promoting and protecting sexual freedom. This understanding, also referred to as ‘sexularism’, has invited fierce criticisms from scholars, most notably the historian Joan Scott, who demonstrated that historically, secularism in fact rested on the exclusion of both women and religion from the public domain, thereby naturalizing what was a gendered and unequal social order (Scott, 2018, 2011). As Cady and Fessenden have noted, the religious hold over sexuality can be analysed as a feature of secular rule. Not only the religious, but also the secular has settled on sexuality as one of the primary domains through which contestations take place (Cady and Fessenden, 2013, p. 8). This is evident in the Phil Bloom event, which precipitated the eventual secularization of the liberal Protestant broadcasting service, the VPRO. At the same time, it is important to acknowledge that secularism has provided many opportunities for questioning and breaking free of such control, thereby extending freedoms (ibid.).
Our focus in this article has been on a different phenomenon, namely how the histories of different Dutch religious subcultures, as well as several iconic events associated with the sexual revolution, demonstrate that religious actors were involved on all sides of the controversies around sexuality. Mass media, literature and the arts became vehicles for the celebration of liberation, resulting in powerful images that shape the collective memory and the historiography of the 1960s until today (Buelens in Andeweg, 2015). These events were produced in the context of a dynamic between progressive and conservative tendencies within religious subcultures and the broader cultural revolution of the 1960s.
This nuance is often forgotten in contemporary debates, in which the sexual revolution is remembered as a time when the Dutch shook off the suffocating shackles of religion. The mediatisation, and thus amplification, of the religious voice as the conservative position vis-à-vis a secular liberal progressive agenda in the 1960s and beyond has created a blind spot regarding how the reforms of the 1960s were in fact supported rather than rejected by many Christians. Notably, many of the more ‘liberal’ policies introduced in the 1970s and 1980s were in fact adopted by the government while Protestant and Catholic politicians were in power (Kennedy and Kennedy-Doornbos, 2017).
As we outlined in the introduction, this association of religion with taboos and restrictive attitudes around sex is repeatedly consolidated in research into sexual health and in public debates, particularly in relation to Muslims with a migration background. Although a greater reflexivity is emerging, this association remains quite strong, especially when it remains implicit.
Indeed, the sexual revolution was the start of major transformations in Dutch society, and it took place in parallel to a process of rapid dechurching. While the development towards sexual openness started much earlier, among secular movements and intellectuals as well as within the Catholic and Protestant communities, there is no doubt that it gained momentum in the 1960s (Schnabel, 1990). For a substantial part of the younger generation, the solid connections between sex, love, marriage and reproduction that had characterized the modern approach to family and relationships became much looser. The newly emerging infrastructure for sexual health (both secular and religious) played an important role in making contraceptives, particularly the pill, widely available and widely used in a relatively short period of time. In addition, this broader availability of contraceptives made possible a broader recognition and acceptance of sex and pleasure as avenues towards personal development and liberation, even among the general population who had not been part of these emancipatory movements.
Scholars and activists have also pointed out that this ‘liberation’ has had mixed results, or could be called incomplete. Sociological, public health and historical research correct or even debunk the myth of sexual liberation. While new structures governing sexuality emerged, older ones continued. Cases in point are the emphasis on romantic love that resulted in people marrying at a younger age, and women’s continued experiences of sexual and gender-based violence during and after this period (Hekma and Stolk, 1990).|17While until the 1980s marriage continued to be important, it was an expression of choice and no longer part of family negotiations. After the 1980s there was a qualitative change visible in marriage and relationships.| Rather than having been liberated since the 1960s, sexuality is still shaped in the context of gender inequality, while it has also become part of new regimes of gendered and racialized power differences, as several scholars have noted (Balkenhol et al., 2016; Bartelink and Wiering, 2020; Knibbe, 2018; Roodsaz, 2018).
As is evident in the literature, as well as in some of the other contributions to this special issue, the too easy assumption that a religious background is a burden when it comes to a healthy approach to sexuality contributes to the production of differences that intersect with other unwanted racial and gendered differences in ways that hinder inclusive approaches to sexual health and well-being.
We have suggested that one, often overlooked element in developing a more inclusive approach is to question existing narratives of Dutch ‘sexularism’ and to acknowledge that both religious and secular actors have been part of the search for better ways of approaching sexual health and sexuality in the history of the Netherlands. This will hopefully generate more curiosity regarding the unexpected and variable ways in which religion, culture and tradition become sources for shaping one’s own sexual well-being.
Note on the Authors
Brenda Bartelink is a senior researcher in the anthropology and sociology of religion. Her research focuses on how people navigate culturally and religiously diverse contexts in and between Europe and Sub-Saharan Africa. She has published on the interrelations of religion and secularity with gender, sexuality, health and forms of care. Her work has been published in The Review of Faith and International Affairs and Religion and Society: Advances in Research, as well as in peer-edited volumes. Email: [email protected]
Kim Knibbe is Associate Professor Anthropology and Sociology of Religion. She has recently completed the project “Sexuality, Religion and Secularism” with Rachel Spronk (funded by NWO). Previous research focused on Catholicism and spirituality in the Netherlands and on Nigerian Pentecostalism in Europe and the Netherlands. She has also published a series of theoretical and methodological reflections on studying lived religion. Email: [email protected]
New Diversities is a peer-reviewed and open-access interdisciplinary journal seeking to promote debate and conceptual development around issues of diversity and social difference. It welcomes submissions, in both traditional and innovative form, that link substantive analyses with critical perspectives on the modes, politics, representations and consequences of social differentiation.
Michelle Goldberg: A modern manifesto against sex positivity (The Baltimore Sun)
https://www.baltimoresun.com/opinion/op-ed/bs-ed-op-0330-mgoldberg-sex-positivity-20220329-bycmlwcch5hrxly7jatp5lrfsy-story.html
Several things played into the changing attitudes and values that spurred the sexual revolution of the 1960s. The invention of the birth control pill and open conversations about women’s sexual liberation encouraged the normalization of extramarital and casual sex. Here, a model gets her body painted in Los Angeles in 1967. (Michael Ochs Archives / Getty Images)
Almost exactly a year ago, writer Katherine Dee, who blogs about internet culture and trend forecasting, predicted what she called a “coming wave of sex negativity.” Sex positivity, she suggested, had created new stigmas, including around discussing the harms of sex work and self-commodification. “People do not want to be atomized,” she wrote, adding, “Nobody wants this dystopia.”
Not everything Ms. Dee foresaw — like a shift toward earlier childbearing among the upper-middle class — has come to pass, at least so far. But she nailed an emerging movement, one that now has a manifesto in “Rethinking Sex: A Provocation” by Washington Post columnist Christine Emba, which I found bold and compelling even when I disagreed with it. Ms. Emba’s argument is that sexual liberation, as currently conceived, has made people, and especially women, miserable. It has created, ironically, new strictures and secret shames, at least in certain elite milieus, around “catching feelings,” hating casual sex and having vanilla sexual tastes.
One anecdote from the book illustrates the perversity, so to speak, of the current moment. Ms. Emba describes meeting a woman at a Washington party who tells her about the man she has been dating. In most ways, he’s great. “But he chokes me during sex” the woman confides. She had consented, but she didn’t like it. She was so unsure about whether her feelings were reasonable that she turned to Ms. Emba, a stranger, for advice. “The taboo on questioning someone else’s sexual preference was that strong,” writes Ms. Emba. Her book is aimed, in part, at breaking that taboo.
Ms. Emba is a heterodox thinker, and it’s hard to situate her book ideologically. As she writes in the introduction, she was raised evangelical, converted to Catholicism in college and spent her early adulthood planning to save sex for marriage before eventually letting go of abstinence. Her worldview, she writes, has “pingponged a bit, from purity culture to a rebellion against it to something in between.”
“Rethinking Sex” speaks the language of both radical feminism and traditional Christian ethics; it quotes Ellen Willis and Thomas Aquinas, Andrea Dworkin and Roger Scruton. Ms. Emba critiques sex positivity, at least in its popular form, as submission to patriarchal capitalistic values, but there’s also a strong streak of conservatism in her work. Among her chapter titles are “Our Sex Lives Aren’t Private” and “Some Desires Are Worse Than Others.”
To Ms. Emba, modern heterosexual dating culture appears to be an emotional meat grinder whose miseries and degradations can’t be solved by ever more elaborate rituals of consent. Now, I write this as an outsider, having married young. But the stories I hear from many of my friends match those Ms. Emba tells, and there’s plenty of empirical data about growing romantic loneliness and alienation. Fewer adults have live-in partners than in recent decades, and young people, despite their apparent panoply of options, are having less sex. “In different ways, both genders have lost confidence in their ability to be together — they no longer know how to do it correctly, or if it’s even possible,” Ms. Emba writes.
As a step toward a solution, she proposes replacing a transactional approach to sex with an ethic of what Aquinas called “willing the good of the other,” or determining to act in one’s partner’s best interests. This sounds nice in theory, but often, heterosexual women are too willing to act in what they believe to be their partner’s best interests rather than their own. The woman who confides to Ms. Emba about choking surely thinks she’s doing something good for her partner by indulging him.
The problem (and I doubt Ms. Emba would disagree with this) is that many women are still embarrassed by their own desires, particularly when they are emotional, rather than physical. She writes that sex positivity “champions the primacy of appetite — our wants are above reproach and worthy of fulfillment, no matter what.” Her book, however, is full of examples of people suppressing their longings. She interviews many women who seem to feel entitled to one-night stands, but not to kindness. What passes for sex positivity is a culture of masochism disguised as hedonism. It’s what you get when you liberate sex without liberating women.
Michelle Goldberg (Twitter: @michelleinbklyn) is a columnist for The New York Times, where a longer version of this piece originally appeared.
The Case Against the Sexual Revolution: A New Guide to Sex in the 21st Century (Goodreads)
https://www.goodreads.com/en/book/show/59852733
Ditching the stuffy hang-ups and benighted sexual traditionalism of the past is an unambiguously positive thing.
The sexual revolution has liberated us to enjoy a heady mixture of erotic freedom and personal autonomy.
Right? Wrong, argues Louise Perry in her provocative new book.
Although it would be neither possible nor desirable to turn the clock back to a world of pre-60s sexual mores, she argues that the amoral libertinism and callous disenchantment of liberal feminism and our contemporary hypersexualised culture represent more loss than gain.
The main winners from a world of rough sex, hook-up culture and ubiquitous porn - where anything goes and only consent matters - are a tiny minority of high-status men, not the women forced to accommodate the excesses of male lust.
While dispensing sage advice to the generations paying the price for these excesses, she makes a passionate case for a new sexual culture built around dignity, virtue and restraint. This countercultural polemic from one of the most exciting young voices in contemporary feminism should be read by all men and women uneasy about the mindless orthodoxies of our ultraliberal era.
Author says she moved on from her “liberal” views as she matured. Great direction, and I love the message, but - influenced by the godless elitist portion of British society in which she grew up, and where I lived for many years - she doesn’t seem to understand that without God as a primary source, nothing can have real meaning, especially when it comes to our fundamental relationships.
Even her practical advice remains untethered, because she has been taught (by her friends and social environment) to reject what she calls “religion and tradition”.
Writing with great intelligence, she struggles to build the right type of framework (marriage is good, porn is bad, etc.) from scratch and she gets to so many great conclusions that, without any metaphysical context, remain suspended in mid-air. This leaves her with a shell without an egg. Almost like someone who throws away the instructions, then works like crazy for 10 years to make up her own instructions, and then ends up with a book of instructions that “kind of sounds” like the original but is not: she still doesn't want to connect with the manufacturer, who is the only one who can give her the key to make the product work.
This way, all remains nothing more than an empty shell. Everything remains at surface level, without any anchor to the deepest hearts of women and men.
Sorry. You can’t write off God. He’s the fabric of our existence. We are wired to worship, and our hearts can only rest when they rest in Him.
No wonder the two strongest pages of the book are the ones where Perry quotes C.S. Lewis and G.K. Chesterton.
This is a very well-crafted book, that carries such an important message. But it lacks depth.
Example: an element that could have been touched with more depth in the book is the concept of "freedom".
"Sexual disenchantment is a natural consequence of the liberal privileging of freedom over any other value, because, if you want to be utterly free, you have to take aim at any kind of social restriction that limit you, particularly the belief that sex has some unique, intangible value. "
What is this "freedom"? In the modern culture, this term is used to mean "the ability to do whatever pleases me", because we live in a society of teenagers. But for many, many centuries, "freedom" has meant something completely different. Within a religious context, "freedom" is the ability to become the best version of ourselves (to become saint) without impediments. Kind of different, right? And yet, that is the only freedom that a mature adult should strive for.
" From this belief in the specialness of sex comes a host of potentially unwelcome phenomena, including patriarchal religious systems. But when we attempt to disenchant sex [...] then there is another kind of cost."
To paraphrase: the sacrality that religions assign to sex actually sounds like the "right thing to do", but… ugh! I've been brought up, like my whole generation, thinking that "patriarchal religious systems" are the bad guys! How can I reconcile that?
You could reconcile that by trying a little harder to understand what religion actually teaches. The fact that some rotten apples within the Church have committed atrocities is often quoted to deny the value of religion, but that's intellectually dishonest. It's also completely irrelevant to this argument (I'm not saying this is Perry's point, because it is not, but it's a very fashionable retort to Christian faith).
As another example of misconstruing religion, Perry quotes evolutionary biologist Sarah Blaffer Hrdy: " We are not ready-made out of somebody's rib. We are composites of many different legacies, put together from leftovers in an evolutionary process... etc. "
Hrdy probably ignores that the Church (both the Catholic and the Anglican ones) fully embrace evolutionary theories. Not only those, but also the Big Bang theory, which was formulated by a catholic priest.
Why am I going on about religion? Isn't it beside the point that this book is trying to make?
Not at all.
In fact, it should be the entire point.
If we really tried to apply "love of God, neighbor and self", as the main gospel commandments, this would be more than sufficient to solve every problem outlined in this book. “A really successful marriage takes 3 people”, Fulton Sheen used to say: “You, your spouse, and God”.
Toward the end, Perry inches close to this conclusion, when she talks about a vague "moral intuition". Fair, but moral intuition is such a vague thing, it’s almost useless.
Religion and the Church have never had a worse reputation than today. I know that.
But look into it more deeply, more seriously, because they do hold the key to our true happiness.
This book is essentially a 'gender critical' feminist manifesto, dressed up as evolutionary psychology, i.e. the science of rationalising personal biases through inductive reasoning about human nature; which can be, will be, and has been used to support polar opposite positions. The author is not a radical feminist in the traditional sense, but she quotes and references plenty of radfems, and this will take you through all the standard talking points of contemporary bioessentialist, anti-gender, anti-queer discourse — males and females have different brains; males are inherently more violent than females; sex work should be criminalised; BDSM is pathological; trans women don't belong in sports, being in all ways physically superior to 'natal women' [yes, she uses that specific terminology], etc. etc. If you're hearing these ideas for the first time, they might appear to be made in good faith and well-intentioned. They're really not.
To gain some perspective on the intellectual milieu this author is operating within, it's worth noting some of the titles in the goodreads 'readers also enjoyed' list:
- Trans: When Ideology Meets Reality by Hellen Joyce: 'Gender identity ideology is about more than twitter storms and using the right pronouns. In just ten years, laws, company policies, school and university curricula, sport, medical protocols, and the media have been reshaped to privilege self-declared gender identity over biological sex. People are being shamed and silenced for attempting to understand the consequences of redefining "man" and "woman".'
- Material Girls: Why Reality Matters for Feminism by Kathleen Stock: 'Material Girls presents a timely and opinionated critique of the culturally influential theory that we each have an inner feeling about our sex called a gender identity, and that this feeling is more socially significant than our actual biological sex. It makes a clear and humane feminist case for retaining the ability to discuss material reality about biological sex in a range of important contexts, including female-only spaces and resources, healthcare, epidemiology, political organization and data collection.'
- The War on the West by Douglas Murray: 'It is now in vogue to celebrate non-Western cultures and disparage Western ones. Some of this is a much-needed reckoning, but much of it fatally undermines the very things that created the greatest, most humane civilization in the world.'
Yikes on bikes.
Mrs. Perry starts off with identifying her ideological antagonists in the first chapter of The Case Against the Sexual Revolution:
'I’m using the term "liberal feminism" to describe a form of feminism that is usually not described as such by its proponents, who nowadays are more likely to call themselves "intersectional feminists".'
She gets her digs in at Simone de Beauvoir, often quoted for her contention that 'One is not born, but rather becomes, a woman', and Emma Watson, who incurred the ire of bioessentialists by showing support for trans women in response to Rowling's repeated attacks and anti-trans essay.
'Liberal feminism takes this market-orientated ideology and applies it to issues specific to women. For instance, when the actress and campaigner Emma Watson was criticised in 2017 for showing her breasts on the cover of Vanity Fair, she hit back with a well-worn liberal feminist phrase: ‘feminism is about giving women choice ... It’s about freedom.’ For liberal feminists such as Watson, that might mean the freedom to wear revealing clothes (and sell lots of magazines in the process), or the freedom to sell sex, or make or consume porn, or pursue whatever career you like, just like the boys.'
In chapter two, she attempts to use the hard science of biology to justify unfounded claims about human psychology, conflating the fields of sociobiology and evolutionary biology. Rape is a social construct, not a biological one, even if there does exist a biological drive to seek sexual pleasure; therefore an explanation for the human behaviour of rape is a question for the realm of evolutionary psychology or better yet, if you care about things like testable hypotheses, sociology.
Under the heading 'Rape as adaptation', she cites the text A Natural History of Rape, basically The Bell Curve of evolutionary psychology's contribution to discourse on rape, to support her interpolation of the 'obvious possibility: that rape is an aggressive expression of sexual desire', a premise which has, in the decades since the book's publication, curiously not yet made itself obvious to most social scientists. 🤔
'Their analysis of rape then forms the basis of a protracted sales pitch for evolutionary psychology, the latest incarnation of sociobiology: not only do the authors believe that this should be the explanatory model of choice in the human behavioural sciences, but they also want to see its insights incorporated into social policy.'
This contention, that rape is motivated more by the desire for sexual release than the desire to exercise social power, is important to making her case that men on the whole have near uncontrollable sexual urges that should be curbed by instituting stronger societal pressures to form lasting, monogamous marriages.
But to get to the bottom of this not-actually-very-controversial issue in social science, why not just ask rapists themselves why they rape?
'In a series of three studies, the authors examined whether the relationship between RMA [rape myth acceptance] and self-reported rape proclivity was mediated by anticipated sexual arousal or anticipated enjoyment of sexually dominating the rape victim. Results of all three studies suggest that the anticipated enjoyment of sexual dominance mediates the relationship between RMA and rape proclivity, whereas anticipated sexual arousal does not. These findings are stated to be consistent with the feminist argument that rape and sexual violence may be motivated by men’s desire to exert power over women; that RMA is a cross-culturally reliable and valid construct; the probability that males who report a high tendency to commit a sexual assault are more likely to rape a woman once they get an opportunity to do so; and suggest that the incidence and prevalence of rape and sexual violence could be decreased by educational interventions that minimize men’s tendency to associate sex with power.'
'At the heart of this resocialisation project is a fundamentally utopian idea: if the differences we see between the sexes are entirely socialised, then they must also be entirely curable through cultural reform, which means that, if all of us, right now, could accept the feminist truth and start raising our children differently, then within a generation we could remake the world.'
What?? Nice strawman cum non sequitur. Literally nobody who understands gender socialisation is saying that. Although it would be neat if things did work that way.
She refers to herself as a 'gender critical' feminist while grasping at straws to malign everyone she can think of who has contributed to the development of critical theory.
'In 1977, a petition to the French parliament calling for the decriminalisation of sex between adults and children was signed by a long list of famous intellectuals, including Jean-Paul Sartre, Jacques Derrida, Louis Althusser, Roland Barthes, Simone de Beauvoir, Gilles Deleuze, Félix Guattari and – that esteemed radical and father of Queer Theory – Michel Foucault.'
Their crime? Advocating for age of consent laws in France to apply equally to homosexual and heterosexual relationships. The age of consent at the time was 15 for heterosexual practices, which granted is really young to be having sex with 40-year-olds, but 21 for homosexual practices. Why the discrepancy? Apparently asking that question means you're a pedophile. And the fact that queer theory employs critical thought and deconstruction is clearly evidence of a queer, trans agenda to normalise pedophilia. Right? /sarcasm
So what does this book propose as the solution to rape culture, or, as Perry would have it, the biologically hardwired tendency of males to rape?
'But while the monogamous marriage model may be relatively unusual, it is also spectacularly successful. When monogamy is imposed on a society, it tends to become richer. It has lower rates of both child abuse and domestic violence, since conflict between co-wives tends to generate both. Birth rates and crime rates both fall, which encourages economic development, and wealthy men, denied the opportunity to devote their resources to acquiring more wives, instead invest elsewhere: in property, businesses, employees, and other productive endeavours.'
That is a direct quote from the final chapter. I kid you not.
She laments the availability of abortion and contraceptive methods – 'When motherhood became a biological choice for women, fatherhood became a social choice for men' – and the legal recourse to divorce for couples that are unhappy in their marriages, because it harms the children, apparently more so than being forced to live in a loveless family.
The weird thing about her criticisms of 'liberal feminism' is that, throughout the book, she illuminates social problems that are common leftist critiques of capitalism, but repeatedly frames them as being caused by a cultural divestment from religious conservative values such as monogamous, heterosexual marriage, modesty and purity culture, and a return to traditional gender roles: 'There was a wisdom to the traditional model in which the father was primarily responsible for earning money while the mother was primarily responsible for caring for children at home. Such a model allows mothers and children to be physically together and at the same time financially supported.' Which is great if that's what you want in life. Not so great when you're being socially pressured into it.
She mentions communism only once in a quote characterising it as a totalitarian, statist ideology, then concludes, 'We have to look at social structures that have already proven to be successful in the past and compare them against one another, rather than against some imagined alternative that has never existed and is never likely to exist.' Louise Perry is not a radical, a Marxist, or a critical, intersectional feminist. She's not really any kind of feminist; she defends liberalism (in the traditional, Lockean sense) while decrying liberal feminism, and doesn't even seem to believe in the possibility of social equality between the two immutable, essential genders. This text literally just repackages fascist family values as 'feminism' and targets vulnerable women who aren't familiar with this particular brand of rhetoric for ideological recruitment and right-wing radicalisation.
I thought this was a pretty good book. The author talks about “chronological snobbery” to mean the attitude that the past is out of date and therefore discredited (E.P. Thompson called it “the enormous condescension of posterity”). She also maintains that female and male sexuality are intrinsically not the same, which has always been obvious to me, though there are lately schools of thought that deny it. I grew up in the north of England at a time when the tradition was for young men and women to find a partner they could get along with and then go through a few stages of courtship, save up for a deposit on a house, then get married in the hope that the marriage would last a lifetime. I left there when I was 16 and didn’t have that kind of marriage, but a couple of my friends who did stayed married all their lives and apparently were happy that way. Aside from such happiness that they found in the marriage itself, it freed their energies to be productive in other ways and to build comfortable homes.
As far as “chronological snobbery” goes, I often try to understand and imagine what it must have been like to live in the age of Jane Austen or in the 19th century of the Schumanns and Brahms. Brahms was in love all his life with a woman he couldn’t have and yet nobody wrote better love songs or had deeper friendships with both men and women than Brahms. Are we missing something? Stefan Zweig writes in “The World of Yesterday” about this; he says when he sees young couples who have become lovers he thinks about Vienna in his youth and the terrible ordeals the kids had to go through if infected with syphilis, and he definitely prefers the modern way, but he was writing in the 1940s long before the rise of Tinder and the modern omnicopulant freelance gonadista.
The writer ends with a recommendation to get married. I think that’s sound advice, even though I never quite managed it. The world of casual sexual encounters is becoming quite terrifying, not just for women, but also for men who can easily lose their reputation and profession if they do something that stranger lying next to them in bed wasn’t quite expecting. Maybe the greatest invention of mankind was not the wheel, but the wedding ring?
This book is not free of flaws, but I’m galvanized by its opposition to the cult mantra-spouting zeitgeist. Over the years, I have been coming to terms with the latent (and today, manifest) hypocrisy of liberal feminism. I like the description “choice feminism” better, as it tends to sidestep the gut reactions of the types of people who are in an unhappy marriage with the simpleton ideology of “liberal = good.” I say, with a whisper and a look over my shoulder, that maybe it is not always good.
The first inklings I started to get that not all was right in the world of liberal feminism was my inability to find unity with two mantras: “men are trash” and “women are liberated when we act like men.” Why, instead of encouraging women to act like the sex we are told is “trash,” do we not encourage men to act like women? Why does women’s behavior make us weak, and why is it something we must change and adapt to meet with men’s behavior? It reminds me that there are women in this world who believe it is progressive to refer to ourselves as “non-men,” defining our material reality not as femaleness, but a lack of maleness.
Were women liberated by the sexual revolution? Louise Perry writes that the adulation of “progressive” men in the 1970s for women’s lib, abortion rights, and, in particular, the rise of The Pill was a self-serving, false lionization. Here were men like Hugh Hefner, who had nothing to lose and everything to gain for the rise of his empire by the pseudo-extermination of the fear of pregnancy, which Perry describes as “one of the last remaining reasons for women saying ‘no.’” Perry is not suggesting that we regress back to the Dark Ages where women lived in fear of pregnancy; she is saying that we ought to be mindful when our wins are convenient to the promulgation of men's desires (women as orifices, on demand). Women are clearly still suffering, in spite of pharmaceutical and technological advances. The liberal feminist reaction has been to say that this is because the sexual revolution is not finished. “Thus they prescribe more freedom and are continually surprised when their prescription doesn't cure the disease.”
Over the past few years, I have been distressed over the frankly insane amount of wisdom we attribute to young people. Never before has Western society been so deferential to these all-knowing saviors, who will obviously cure every ill that has existed since the dawn of humankind, in spite of adulthood and independence being entered into at later and later ages. Perry discusses C.S. Lewis’ concept of “chronological snobbery,” or the idea that the intellectual climate of our own time is uncritically accepted, and “whatever has gone out of date is on that count discredited.” This is such an important topic that I am always trying to shoehorn into the (now rarer and rarer due to people's heightened sensitivities to opposition) political conversations I have, because we are making legislation and policy today on the sole directives of youth culture’s prerogatives, seemingly failing to consider how wrong we were - and know we were - as children ourselves. Perry concurs: “The fetishisation of youth in our culture has given us the false idea that it is young people who are best placed to provide moral guidance to their elders, despite their obvious lack of experience.” But the point is that this chronological snobbery is one of the key components in the rise of the “fuck like a man” mantra. In the early 2000s, when I came of age, adults wouldn't have dreamed of deferring to my uninformed hot takes. Now? “Well, I certainly wouldn't have dreamed of sleeping with multiple men over the holidays in the 1980s,” thinks the older woman. “But my brilliant, wise twenty-year-old daughter says it's liberating, and after all, I'm just old.” No one wants to be Karen. Perry pushes against MLK's famous “arc of justice” vision, writing that “the ‘progress’ narrative disguises the challenge of interconnectedness by presenting history as a simple upward trajectory, with all of us becoming steadily more free as old-fashioned restrictions are surmounted.”
Perry moves into what are probably the most controversial takes of the book when she discusses how “consent culture” (my coinage) has failed. Firstly, that things like “consent workshops” do not prevent rape, because rape is not, as I used to parrot as a self-righteous libfem in the early 2010s, about power. It is about sex. And if we really want to stop women from getting hurt by men, we must stop pretending like educating would-be rapists about consent will stop rape. It is all nice and good to say that men should just stop raping, and it’s hard to combat that line because it is true. Speaking about intoxicated women in a nightclub, Perry writes, “Is it appalling for a person to even contemplate assaulting these women? Yes. Does that moral statement provide any protection to these women whatsoever? No.” She's right, and she decries the collateral damage of women in service to a dogma that says encouraging women to protect ourselves is victim-blaming. Rape is not a philosophical exercise. It is a material reality that women and men experience every single hour of every single day.
And secondly, that “consent” to any given action does not make it moral or good, her thesis to a chapter on the ethics of BDSM. Perry points out that, contrary to the popular line that BDSM subverts gender roles, the majority of its advocates fall into the expected lines. In a survey, most women identified with masochism, while most men identified with sadism.
In the fourth chapter titled “Loveless Sex Is Not Empowering,” we come to the crux of Perry's argument: “hookup culture” is not empowering, and is, in fact, disempowering. To take this personally, I know that what Perry says is true because I lived it. Agonizing over casual sex with men we “caught feelings” for made up entire drinking session conversations with my friends. We had years-long pseudo-relationships with men who had no idea we would have provided the getaway cars for their armed robberies on request. It was never suggested that we should fess up, not once. But Perry has an uncomfortable, unbelievably unpopular explanation for this, and one I would have denied in my youth and now understand to be undeniable as I age: men and women are different. She uses the now unfashionable field of evolutionary psychology to evidence her claims: “A society that prioritizes the high sociosexual [defined as a person's interest in sexual variety and adventure] is necessarily one that prioritises the desires of men, given the natural distribution of this trait, and those men that need to call on other people - mostly young women - to satisfy their desires.”
These are hard sells to today’s handmaidens of capitalism (I have recurring nightmares featuring the amorphous female blobs of today's commercial art), liberal feminists. There will always be women who claim they enjoy things others tell them are bad for them, and right behind them will be men cheering on their identification as the cool girl who doesn’t show any emotional needs of her own, happy to be just a hookup. Perry notes that prototypical cool girls Carrie Bradshaw of Sex and the City and Stella Gibson of The Fall “have loveless, brusque sex with men they don't like... the pursuit of the encounter... is psychological gratification.” Perry notes that both characters claim to have “fucked like a man,” but she believes this is “purely reactive,” as liberal feminists have a laser-focus on “advising women to work on overcoming their perfectly normal and healthy preference for intimacy and commitment in sexual relationships.” This is satisfying to hear; I wish I had understood that “catching feelings” for men who paid me sexual attention was normal. It is not a deficiency, nor something to be metaphorically beaten out of women. “Demisexuality” is the pathologization of average female desire.
Perry moves back to the failure of consent culture in a chapter on pornography. She notes, and I agree, that the libfem appeal to consent “cannot account for the ways in which the sexuality of impressionable young people can be warped by porn or other forms of cultural influence.” I am reminded of how women consistently talk about the emotional tolls of keeping up with the social media Joneses, while simultaneously brushing away the idea that pornography being a click away may have the same sort of power over men.
There are two things I take umbrage with in the book: the first is that Perry is incredulous that a woman can be turned on by the ubiquitous “dick pic.” Without oversharing, I have indeed been turned on by these photos sent to me (consensually) by men. “I know of no women who would masturbate” to these images, she writes. I would argue: yes, you do. But I can understand why the libfem preoccupation with the “unsolicited dick pic” would have her believing this can't possibly be true. I also disagree with her later advice that women should not have sex with men if they would not make good fathers; she states that this means they aren't “worthy of your trust.” There are plenty of men who would be bad fathers, but good partners. A lack of parental instinct does not make one a bad romantic partner. Perry should have outlined the traits of this “good father” and applied them as such, not relied on an unformed feeling women are supposed to have that men would be good fathers.
Perry might lose readers in her final chapter, which extols the twin virtues of motherhood and marriage (she never once says that either is necessary for an individual woman’s liberation). But I must admit that I identify with her conclusions. Hookup culture took up a large part of my romantic (unromantic) life, and my friends who did not partake, anecdotally, seem to be happier, healthier, and more grounded. I am now a married woman myself, and I am very happy to be.
Here’s some advice for your daughter and mine as they reach sexual maturity:
• Hold off from having sex with a new boyfriend for several months.
• Don’t use dating apps.
• Monogamous marriage is by far the most stable foundation on which to build a family.
• Run a mile from any man who is turned on by violence.
These are just some of the conclusions reached by Louise Perry in her disturbing, riveting book The Case Against the Sexual Revolution (A New Guide to Sex in the 21st Century).
A journalist who writes for both the right-wing Daily Mail and the left-wing New Statesman, and a former rape crisis centre worker, Perry is one of several fresh voices (Nina Power and Mary Harrington being other examples), who brave the hostility of their progressive sisters by acknowledging that men and women are different, that the differences matter, and that unbridled sexual liberation might not be the road to utopia.
Superficially, women are more liberated than their forebears, but are they happier? As you might have noticed, mounting evidence suggests not. Perry blames the supposedly liberating forces unleashed by second wave feminism. The book is counter-cultural; its punchy chapter titles have such an old-school flavour that they appear bracingly cutting-edge:
1. Sex Must Be Taken Seriously
2. Men and Women Are Different
3. Some Desires Are Bad
4. Loveless Sex Is Not Empowering
5. Consent Is Not Enough
6. Violence Is Not Love
7. People Are Not Products
8. Marriage Is Good
Conclusion: Listen to Your Mother
Perry argues that the liberation of sexual behaviour that came about through the contraceptive pill, legal abortion and the Divorce Reform Act has benefitted men (though only high-status ones, and only superficially) by providing greater opportunities for no-strings sex. But there have been negative, unintended consequences. As Perry’s grandmother puts it: ‘Women have been conned’. While the pill has ostensibly liberated women, it has led to more single mothers. The pill is apparently not as reliable as is widely assumed (under typical use, a meaningful share of women taking it will get pregnant within a year). But with legal abortion as a back-up, it has killed off the shotgun wedding.
“When motherhood became a biological choice for women, fatherhood became a social choice for men.”
Nor does the state adequately compensate. It doesn’t supply the love and emotional support of a father, and in many cases doesn’t even supply the cash. In the UK, less than two thirds of non-resident parents (most of them fathers) are paying child maintenance in full. (Perry doesn’t mention an additional, systemic flaw in the Child Maintenance Service, which is that it provides a financial incentive for mothers to reduce the number of nights a child stays with their father to zero.)
The sexual revolution has attempted to sever sex from emotion, a process of ‘sexual disenchantment’. But this appears to be impossible for women in particular. Forget Sex and the City – the evidence shows that casual sex makes women miserable. Despite attempts by some magazines to advise women on how not to ‘catch feelings’ (Don’t look him in the eye! Take methamphetamines! Think about someone else!), it seems that sex is a deeply emotional experience after all. Most of us know this instinctively, but there are profound cultural pressures that want you to think otherwise.
One of these comes from pornography, which emerges as the book’s chief villain. The explosion of online porn is rendering men incapable of the real thing. Erectile dysfunction now affects between 14 and 35 per cent of young men, compared to two or three per cent at the start of the century. Men appear to be catching some dangerous kinks from porn, which young women feel pressured to indulge, hence the increase in injuries and deaths from choking during sex.
And while a minority of women might make a morally dubious living from OnlyFans, the vast majority of those who attempt it end up with a measly handful of followers for hours of wasted effort, having jeopardised any future long-term relationship with a man. Despite our supposedly relaxed sexual mores, apparently men don’t tend to marry former sex workers. Who knew?
It might be some consolation if the proliferation of porn, the erosion of marriage and the ubiquity of dating apps meant we’re having more fun, but no. Not only are we staring down the barrel of an economic recession, we’ve been living through a sex recession for years.
“Put simply, the porn generation are having less sex, and the sex they are having is also worse: less intimate, less satisfying and less meaningful”.
And more dangerous. As well as the more violent elements of porn culture creeping into the bedroom, the liberal feminist notion that sex differences are trivial has also put women in danger. It’s easy to forget, if you work in an office, how much men and women differ physically. Perry puts it bluntly: “Almost all men can kill almost all women with their bare hands, but not vice versa. And that matters.”
(I’ve seen disturbing evidence of the failure to grasp this fact. Teaching at a school in a deprived part of the country not long ago, I had to intervene to prevent male pupils physically assaulting girls – slapping their faces, grabbing them by the neck. What shocked me was that the girls complained when these boys were punished for their violent acts. Is violent attention from a boy really better than no attention? Or, raised on a diet of kick-ass superheroines and fatuous bromides about gender equality, are today’s girls dangerously oblivious to the physical inequality of the sexes?)
You might think the solution to male violence is to teach boys to respect women, and remind them not to rape. But while Perry sees some utility in consent workshops, they are an inadequate means to tackle male violence. The problem is that while most men are not by temperament potential rapists, some are (the psychological consensus puts this at about 10 per cent), and these men don’t care what feminists have to say.
“Posters that say ‘don’t rape’ will prevent precisely zero rapes, because rape is already illegal, and would-be rapists know that. We can scream ‘don’t rape’ until we’re blue in the face, and it won’t make a blind bit of difference.”
The solution is to reduce opportunities for potential rapists. And that, sadly, means that some women might need to moderate their behaviour.
And yet the advice from popular culture is for women to act recklessly. Perry cites an astonishingly idiotic piece of advice from Dolly Alderton, responding to a letter in the Sunday Times by a woman concerned that she was drawn to misogynistic men.
Instead of advising her to give such men a wide berth, Alderton encouraged her to seek them out. “You need a kind, chill, respectful boyfriend in the streets and a filthy pervert in the sheets. They do exist. I hope you have fun finding one.”
If liberal feminists such as Alderton underestimate the dangers of the darker side of male sexuality, their second-wave forebears underestimated the protective function of marriage. In characterising marriage as a tool of patriarchal oppression, they appear to have scored a whopping own goal. It is no coincidence that the most influential of the second wavers were childless, and had little to say about motherhood. Those who came before them, such as Mary Wollstonecraft, recognised that men had a higher sex drive, and therefore a responsibility to contain themselves. ‘Votes for women, chastity for men’ is a suffragist slogan we seem to have forgotten.
“A monogamous marriage system is successful in part because it pushes men away from cad mode,” writes Perry. “A society composed of tamed men is a better society to live in, for men, for women and for children.”
It’s impossible to read Perry’s work and not wonder if more freedom in sexual matters might do us more harm than good. The author’s primary concern is the well-being of women, but the trends she identifies make me worry just as much for my son. Some thoughts on what young men can do to navigate the dating environment with dignity would have made for a more rounded book.
Older readers might be asking why we need a new book to point out what your mother could have told you. Because yet another dumb thing about modern culture is the way we dismiss the wisdom of our elders. Older women are ‘Karens’ who are told to ‘educate’ themselves. We nod along to the apocalyptic rantings of Greta Thunberg, even teaching them in schools as models of fine rhetoric. Mature women who know that biological sex is real, and say so, are pilloried as ‘Terfs’. The young dismiss the advice of their grandparents – ‘Okay, Boomer’. But how is this attitude working out for a generation beset by rising levels of anxiety, depression and self-harm?
There is now a heart-breaking tendency for young women to try to mother themselves. Perry cites a viral TikTok video by a young American woman, Abby, who pulls up images of herself as a child and asks, ‘Would I let her be a late-night, drunk second option? Would I let this happen to her?’ Abby is trying to mother herself, and the thousands of tearful, grateful replies to the video suggest many other young women want to do the same.
Perry concludes, “They’ve been denied the guidance of mothers, not because their actual mothers are unwilling to offer it but because of the matricidal impulse in liberal feminism that cuts young women off from the ‘problematic’ older generation … Feminism needs to rediscover the mother, in every sense”.
I have deducted one star because this book is a bit repetitive in places, but it’s well-written, reasoned and thought-provoking. As the author says, it should be compulsory reading for young women. I’m somewhat younger than the likes of Germaine Greer, but it depresses me that so many years after those ground-breaking early feminists, women are still willing to view their lives through the prism of what men dictate they should do/feel/think. We’ve been sold a pup, ladies! And now, trans women (mainly those who either still are, or formerly were, men - let’s not forget) are attacking our very identity. When will we gain the confidence to say “er, no, I’m not buying this”? And actually mean it? I don’t hate men. I’m happily married to one. Like Louise Perry, I acknowledge there are real differences between the sexes, and always will be. But I refuse to believe that having XY chromosomes automatically entitles a male to be my master. I hope all who read this book arrive at the same conclusion because only then will we achieve true equality … and respect for each other as human beings.
This book seems to go over topics and themes that I'm interested in, but the approach is a mix of academic with a hint of judgmental, which then taints the academic portion of the message.
I wanted to like this book but I simply could not complete it. I'm not 100% finished with it, but I recognize it doesn't deserve any more of my time.
One of the last sentences that I read in this book -- that perfectly sums up this book -- was her talking about how people have found out that "having sex like a man" actually meant "having sex like an arsehole." This is off-putting to me. Not because I'm a man, but because it is reductionist. This reads like something someone would want to hear instead of something someone needs to hear.
I feel like this is a safe book for those who are interested in this topic, but it's not something that actually pushes the topic further. This is a filler book that looks to solidify what you already know by referencing modern situations and media (like the Netflix movie Cuties) that are close to the topics being discussed. It essentially serves as an echo chamber where those who agree will continue to agree through confirmation bias, while others will bail after the surprisingly long introduction that was mostly about liberals (not their role in sex but literally what it means to be a liberal today).
It’s about time some feminists are finally realizing that the modern sexual ethic doesn’t empower women, but instead harms women while serving the interests of selfish and irresponsible men.
Perry reminds the reader the point of feminism is — or at least should be — to promote the wellbeing of women. By this metric the modern sexual ethic should be regarded as anti-feminist.
The current sexual ethic insists that as long as there is mutual consent then anything goes. Further, many feminists urge women to “have sex like a man” as some sort of badge of liberation, meaning that women should have frequent casual hook-ups pretending there are no consequences. Perry shows that this is horrible advice for women.
Because of innate biological differences, the consequences of sex have always been asymmetrical between men and women. The availability of the pill has fooled many into believing this is no longer true, yet 8% of sexually active women on the pill become pregnant every year.
Thanks to this hookup culture men have been taught that it is completely normal and desirable to expect sex from a partner without having any emotional attachment. Because so many women subscribe to this ethic, there is an expectation that sex will occur very quickly in a relationship (often on the first date), but women who prefer to wait longer find that their pool of available men is greatly reduced. Meanwhile, women believe that something may be wrong with them if they start to develop romantic feelings for their “friends with benefits,” and that these feelings should be suppressed.
The modern ethic also promotes the porn culture, which in addition to creating a raft of other pathologies, teaches men that women are objects to be used for sexual gratification and then discarded. It thus contributes to the “rape culture.”
The modern mindset mainstreams the BDSM movement which is primarily about men deriving sexual pleasure by inflicting violence on women.
The modern ethic decreases the likelihood of marriage, which is demonstrably beneficial for the financial well-being of women, and provides advantages to their children in practically every measurable category.
So who then benefits from this culture? Well, it’s people that rate highly on the sociosexual scale, meaning those who desire to have sex with multiple partners and without commitment. This culture enables these folks to get what they want more easily and more often. The thing is, the vast majority of these people are men.
You don’t have to be a puritanical scold to believe that these ideas spawned by the sexual revolution are harmful to women in particular and corrosive to our society in general. In fact, Perry’s arguments are made from a humanist and evolutionary framework, focusing on empirical measures of human flourishing, rather than from any religious revelation.
The Left (correctly) argues that a political libertarianism can eventually lead to a society that gives the strong the freedom to dominate the weak. Therefore regulation and reform is required to protect those who may otherwise be crushed. Yet curiously, many don’t seem to recognize that this “anything goes” sexual libertarianism similarly sets up a system where men will tend to dominate and exploit women for their own gain.
Readable and gripping - clearly deeply felt, but also rationally argued, generally nuanced and compassionate and eloquently expressed, with moments of terrifying clarity, such as "it remains true that almost all men can kill almost all women with their bare hands, but not vice versa". It's also deeply depressing, because Perry is speaking good sense in a world gone violently mad, and it's hard to see the tide turning anytime soon to anything more positive. She is commendably direct in her call to action from the reader, but also acknowledges that the trends she's discussing are broad, culturally ingrained and shaped by technology, the economy, academia, etc etc. She wants young women to read and be convinced by her book, but believes them to be the group most set at a disadvantage by the current sexual landscape.
Part of the issue, I think, is that I struggle to see this book changing many minds. Partly that's because arguments based on the premise of individual liberty and experience and the belief that things like gendered trends in sexuality or violence are socialised are so hard to argue against without appeal to some kind of external arbitration. Perry doesn't help herself on this - I think the book leans a little too much on "self-evident" truths and statements like "no woman I know would..." which I'm not sure are convincing to those who don't already agree with her.
But ultimately, I don't think there's anything else she can do. She ends chapter 1, "Sex should be taken seriously" with this statement: "So I am going to propose an alternative form of sexual culture - one that recognises other human beings as real people, invested with real value and dignity." She is absolutely correct: this is attractive, and a traditional view of marriage and sexuality is better for society and for women. But why? She can't say. And she can believe very strongly and correctly that "Some Desires Are Bad" and men need to control their sexual appetites for the good of women - she's right - but she has no idea how that might happen. There's a massive hole in the foundation of this book, and I hope it convinces some people to stop selling each other a damaging lie - but without the truths that underpin the principles that create marriage, that control violent and selfish human beings and lead them to love people weaker than themselves, she's not going to change the world.
On the off-chance you read no further in this review, I'll share the most important counsel here, information that should be dispensed at freshman orientation. "While there is advice within these pages that could be helpful to any reader, it is worth repeating here the points that are most relevant to these particular young women...:
• Distrust any person or ideology that puts pressure on you to ignore your moral intuition.
• Chivalry is actually a good thing. We all have to control our sexual desires, and men particularly so, given their greater physical strength and average higher sex drives.
• Sometimes (though not always) you can readily spot sexually aggressive men. There are a handful of personality traits that are common to them: impulsivity, promiscuity, hyper-masculinity and disagreeableness. These traits in combination should put you on your guard.
• A man who is aroused by violence is a man to steer well clear of, whether or not he uses the vocabulary of BDSM to excuse his behaviour. If he can maintain an erection while beating a woman, he isn’t safe to be alone with.
• Consent workshops are mostly useless. The best way of reducing the incidence of rape is by reducing the opportunities for would-be rapists to offend. This can be done either by keeping convicted rapists in prison or by limiting their access to potential victims.
• The category of people most likely to become victims of these men are young women aged about thirteen to twenty-five. All girls and women, but particularly those in this age category, should avoid being alone with men they don’t know or men who give them the creeps. Gut instinct is not to be ignored: it’s usually triggered by a red flag that’s well worth noticing.
• Get drunk or high in private and with female friends rather than in public or in mixed company.
• Don’t use dating apps. Mutual friends can vet histories and punish bad behaviour. Dating apps can’t.
• Holding off on having sex with a new boyfriend for at least a few months is a good way of discovering whether or not he’s serious about you or just looking for a hook-up.
• Only have sex with a man if you think he would make a good father to your children – not because you necessarily intend to have children with him, but because this is a good rule of thumb in deciding whether or not he’s worthy of your trust.
• Monogamous marriage is by far the most stable and reliable foundation on which to build a family."
The author draws upon her work in a rape crisis center to share her observations. Most of the sources she cites are popular articles from mass media, not peer-reviewed journal articles or academic monographs. I was frustrated by that and the feeling that she never quite closes the deal on any of the topics:
• female and male attitudes toward sex are different, but our society has permitted the male attitude to eclipse the female and privilege the male;
• sexual intercourse has a special quality, but society leads us to believe it doesn't, again privileging the male;
• hook-up culture, pornography and BDSM ("simply a ritualized and newly legitimized version of a toxic dynamic"--choking and strangulation have been normalized) are harmful to women and men, but more harmful to women;
• marriage is good--for men, for women, and for children;
• people have "real value and dignity. It's time for a sexual counter-revolution" (20).
It should be self-evident that liberal feminists "have done a terrible thing in advising inexperienced young women to seek out situations in which they are alone and drunk with horny men who are not only bigger and stronger than they are but also likely to have been raised on the kind of porn that normalizes aggression, coercion and pain" (15).
Hook-up culture is a terrible deal for women and yet has been presented by liberal feminism as a form of liberation. "A truly feminist project would demand that in the straight dating world, it should be men, not women, who adjust their sexual appetites" (79). Magazines now direct women to "emotionally cripple themselves to gratify men" and to believe "that emotionless sex was the feminist thing to do" (81).
This book has been making the media rounds as some kind of contrarian revelation. Maybe it is that to an audience of young women; there is nothing here that provoked in me any new way of thinking about the subjects, but I used to teach Women's Studies before the gender ideology took over. It is another entry in the genre of describing and to a degree prescribing how to cope with existence in "the nihilist moment of disillusionment and anger, after people have lost faith in the old stories but before they have embraced a new one" (Harari 2018). The Pill has been around for over 60 years; Homo sapiens for 200,000. "We evolved in an environment in which sex led to pregnancy" and males attempted to mate with as many females as possible. We cannot pretend that contraception has erased millennia of adaptation. Perry made no mention in this book of the impact of contraception on women's bodies. From my summary and review of This Is Your Brain on Birth Control: The Surprising Science of Women, Hormones, and the Law of Unintended Consequences: "It is astonishing that so many women daily take a medication that "influences billions of cells at once from head to toe" throughout the body, without giving thought to the significant consequences these pharmaceuticals have on every aspect of their being, how they think, look and behave, "how they see the world...and just about anything else you can possibly imagine." Your likelihood to divorce may even depend on whether you met when you were taking the pill or not. Hormones are powerful chemicals and their impact is far reaching."
In a nutshell: Women and men are not the same. Men liberated "their own libidos while pretending they were liberating women" with access to abortion and contraception. The unforeseen consequences and the sexual ethic that have resulted have privileged male sexual experience and innate desire for quantity and variety as desirable. Liberal feminists have unwittingly promoted male experience as normative in so many ways by encouraging casual sex, devaluing marriage, ascribing greater value to women's public representation and work outside the domestic sphere to the detriment of motherhood and home-making, and especially by adopting male attitudes toward sexuality rather than affirming the female need to be choosier due to potential for physical harm and pregnancy. Instead, we should be seeking to "promote the wellbeing of both men and women, given that these two groups have different sets of interests, which are sometimes in tension" (10).
There was not enough discussion of the influence of the market, which benefits when individuals are freed from all commitments.
"This ideal liberal subject can move to wherever the jobs are because she has no connection to anywhere in particular; she can do whatever labour is asked of her without any moral objection derived from faith or tradition; and, without a spouse or family to attend to, she never needs to demand rest days or a flexible schedule. And then, with the money earned from this rootless labour, she is able to buy consumables that will soothe any feelings of unhappiness, thus feeding the economic engine with maximum efficiency" (9).
And, from Deborah Spar's book, “In purely economic terms,….women are not better off giving away something they once bartered. No, women do not gain by losing the power they once had to force men to buy their favors .... a trio of leading economists [found]…the advent of abortion and contraception in the U.S. may actually have worsened the fate of women, or at least weakened their ability to bargain with men. Specifically, they demonstrate that just as women gained the power to prevent pregnancy so, too, did they lose the power to commit men to marriage in the case of an unwanted pregnancy.” And, a young woman who wants a relationship but does not want to engage in sex will be at a competitive disadvantage to her willing peers.
Perry intimates and implies, but is just too subtle (a quintessentially British quality) for my taste. As a Weberian (my doctoral thesis topic), I confess Perry had me in the palm of her hand when she mentioned Max Weber on page 11. Weber described as "disenchantment" (Entzauberung) the condition of the modern world in which rationalism has stripped the world of magic, but not humans of the longing for the transcendent that religion used to fulfill, so they attempt to access it by sensual means (sex, alcohol and other drugs, food, etc.), which does not and cannot work. Sexual disenchantment means that "sex is nothing more than a leisure activity, invested with meaning only if the participants choose to give it meaning....that sex has no intrinsic specialness, that it is not innately different from any other kind of social interaction, and that it can therefore be commodified without any trouble" (11).
Right now, "consent is the only moral principle left standing under the reign of sexual disenchantment" (68).
"And the liberal feminist appeal to consent isn't good enough. It cannot account for the ways in which the sexuality of impressionable young people can be warped by porn or other forms of cultural influence. It cannot convincingly explain why a woman who hurts herself should be understood to be mentally ill, but a woman who asks her partner to hurt her is apparently exercising her sexual agency. Above all, the liberal feminist faith in consent relies on a fundamentally false premise: that who we are in the bedroom is different from who we are outside of it" (131).
We need "A sophisticated system of sexual ethics needs to demand more of people, and as the stronger and hornier sex, men must demonstrate even greater restraint than women when faced with temptation." We need a return to the "Chivalrous social codes that encourage male protectiveness toward women." These "are routinely read from an egalitarian perspective as condescending and sexist. But...the cross-culturally well-documented greater male physical strength and propensity for violence makes such codes of chivalry overwhelmingly advantageous to women, and their abolition in the name of feminism deeply unwise" (69). But the media and society encourage men in particular not to resist harmful desires, but to cultivate them.
"Why do rape and molestation cause more harm, if sex has no more significance than other acts?" (See https://americancompass.org/three-the... ). Perry is convincing when she indicates the many reasons that using consent as the basis for determining harm is useless. Again, sex is quite different from other activities and women, who tend to score high on agreeableness, convince themselves that their participation in certain activities is their choice. Many of the women who denounced Harvey Weinstein consented, but later repented, feeling violated. It is not until much later that hook-up partners, porn actresses, and prostitutes realize how profoundly they were damaged by their activities. False consciousness had taken hold. At the risk of allegations of a "nanny state," a stronger argument for the legal protection of women against ostensible consent would be difficult to find.
Further, young women criticize the stereotypical woman of the 1950s for pleasing their husbands, but eagerly read articles in women's magazine about how to please a man sexually. While "sharing the inside of their bodies was expected, revealing the inconvenient fact of their fertility felt too intimate. We have smoothly transitioned from one form of feminine subservience to another, but we pretend that this one is liberation" (20).
Perry mentions various times one of the many rifts between liberal feminism (which I have elsewhere described as emphasizing superficial "personal choice" devoid of analysis of the deeper impetus for those choices and how the illusion of freedom and personal choice affects society as a whole) and radical feminism, which is far deeper, more critical and eager to examine the bigger picture. Freedom for the predator is death for the prey. Liberal feminism promotes a carefree sexual ethic, which aside from the lack of consent, actually undermines the case that rape is a singularly traumatic crime. Radical feminism looks at how and why female experience of sex differs from the male and refuses to capitulate to patriarchy, the establishment of male experience as normative.
In the second chapter, Perry refers to evolutionary psychology and biology, but in superficial ways, mainly regarding rape. A Natural History of Rape revealed that rape is not solely about violence, as the liberal feminists have asserted for decades against the obvious; it is very much about sex. "Concluding that rape must be motivated by the desire to commit acts of violence because it involves force or the threat of force is as illogical as concluding that men who pay prostitutes for sex are motivated by charity." Great line. We are primates, so it's informative to see what is normative in other primates to ascertain biological proclivities, but that's not part of Perry's argument. Astonishingly, she never refers to the bonding hormone oxytocin that increases in women during sexual stimulation, while men get bursts of the highly addictive dopamine. That is one enormous oversight.
I had high hopes for this book, but Deborah Spar's Wonder Women(2013) is a better book for readers seeking a more academic treatment.
Nevertheless, this book would be an excellent discussion starter for a First Year Experience or an Introduction to Women's Studies course. It will be useful for young women or for those who haven't been deeply involved with the topic. Those who have been in the feminist trenches for 30 or more years will only be surprised to learn how far off the rails the movement has gone.
2 stars for the deep research and thought that Perry clearly put into this work, deducted 3 stars because…well, at the end of the day, Perry comes from the same white elite background as the liberal feminists she’s critiquing, and her arguments stand on some extremely shaky ground and blind spots, leading her to a reactionary and ultimately harmful conclusion (spoiler: to get married).
Through the different interviews and sources that Perry pulled together, it’s clear that she tried to prove her point through as many means as possible (personal experience, testimonies, sociological reviews, some evolutionary bio reviews) and it makes for an engaging, compelling read. I broadly agree with her arguments that casual heterosexual sex leads to more harm to women, that prostitution is more of a moral bad than good as it disproportionately concerns women who are poor and oppressed, and that the consent framework that we have currently is not enough.
However, I found myself bristling against the core of her argument, which boils down to the fact that men and women are biologically different. I find this to be an easy excuse, another iteration of boys will be boys, so we girls just have to protect ourselves (which she literally says in the conclusion, offering her daughter the standard list of old wives’ cautionary words: don’t date anyone you wouldn’t marry, try to stay in a monogamous marriage, and don’t use dating apps). So after all of this rhetorical work, after the progress that we’ve made in gender equality that Perry even acknowledges, like the destigmatization of divorce and access to contraception, we’re supposed to just resign ourselves to a specific vision of monogamy??? And what is the role of the man in all of this—poor little Robert who can’t help but want to sow his seed, but through the pressures of conventional society will have to constrain himself to having sex with one measly woman rather than hundreds??? This is exactly the model that she criticizes, wherein the upper and middle classes stay in monogamous marriages and a small proportion of lower class women are sacrificed to soak up the extra sexual energy that men just have biologically. It’s the same move of putting the load onto women to protect themselves from men’s misdeeds without addressing the real issue, Western patriarchy.
I also find the piece of advice of staying together and persevering through marriage to be incredibly insulting. Does she not think that people try to make their marriages work? And doesn’t she also realize that when people stay in unhappy marriages and still raise their kids together, those kids can still end up with as much emotional damage as children with single parents? Again, she herself points out that those who are upper-middle class have it easier because they have the resources to first enter a good marriage and then deal with the repercussions of a failed marriage. What about the rest of us, who have to deal with the repercussions of either choice nonetheless?
Of course, I’m not trying to say that we should default to the free market of casual sex that Perry takes to be the norm (which I feel like she’s exaggerating—the AVERAGE number of sexual partners is meant to be high because it’s skewed by outliers; the majority of people aren’t going around hooking up with everyone they see—Perry clearly needs to revisit statistics class). I’m just tired of marriage being the solution when it’s clearly not and hasn’t been.
And perhaps this is a nitpicky point, but in her conclusion, she argues that societies with monogamous marriages are the most stable BECAUSE they are richer/more productive. So after all of this talk about human dignity, which has a very spiritual basis that this book is lacking, only to defer to an economic argument? It’s so well known that economic productivity only leads to happiness to a certain point—why can’t we address this?
My contribution to this discussion is to steer away from Perry’s biological argument which I find both insufficient and insulting to men and women. What if the problem is Westernization and colonization and not the biology of gender (which, again, is under a lot of examination now)? After all, the institution of monogamous marriage/the nuclear family that Perry portrays to be soooo universal is extremely Western; all other forms of societies are simply cast aside as men investing their time into second wives rather than their economic production. AHHH. This is so detractive, incorrect, and Western! What about Native, Asian, African, etc. societies that relied on a communal model of child rearing that decenters the nuclear family and still isn’t polygamous? Perry is so dismissive of “gestational communism” because it detracts from the mother-child bond and skips right over its benefits because to her, it’s a zero-sum game where the mother MUST be the primary figure in a child’s life and all other relationships are flimsy and therefore unnecessary. She’s forgetting (or more likely, not aware of) communities where the grandparents, aunts, uncles, extended relatives, and friends are strong influences on the child, inherently decentering the nuclear family but not detracting from it. To be so attached to the supremacy of the mother-child relationship is so selfish, which somehow Perry paints to be a good thing. Are we saying that to be a mother is to be inherently good? I can point to so many examples that this is not the case, and literally to my own life, where because the mother values herself so much, she ends up damaging the child.
To me, the core of the issue lies in the lack of love outside the self in Western societies and all other societies that the West has colonized. I don’t believe that men are inherently more sexual—I believe that they’ve been socialized to initiate violence against women and to never have the emotional intimacy that women form with each other. So then they seek love or what they think is love through sex. It’s because they’re so lacking in love that they perpetuate this violence, ignoring consent and pushing forward prostitution. And women in men’s circles can’t help but adopt this attitude too, also from a lack of love which in Western society they’ve been taught to seek primarily from a romantic relationship. We need to all learn to respect each other’s inherent value and how that affects relational acts like sex (as Perry argues)—and that can only be done through pushing forward an ethic of love, an all-encompassing selfless love. It’s time to introduce more Indigenous and non-Western thinking that centers community into this crucial debate because at the end of the day, if we keep centering one’s selfish desires, then no one wins.
An extraordinary book - filled with material that exposes the lie of the sexual revolution: "freedom is everything". Perry is not conservative, but exposes so many ways that liberalism has left society (especially women) worse off in the realm of sexuality. Her work is measured and well researched, and she is unembarrassed about being frank (and using language to match). At moments I was shocked, at other times simply grieved, to hear tragic accounts of how the cultural shifts of the past 60 years have worked out.
One of her most profound insights is gleaned from Chesterton - that you shouldn't reform things until you understand what they are for (p51). The sexual revolution has reformed sexuality without understanding its purpose, or how it really works. However, it is for this same reason that I can only give this book 4 stars; Perry's persistent attempts to explain every aspect of sexuality through an evolutionary perspective prevent her from understanding sex's real purpose. She is frequently tantalisingly close - at moments it genuinely seems like she recognises the beauty of Christian teaching (see, for example, her advocacy for marriage in the final chapter) - but she consistently falls just short. She is fixed in an anti-Christian posture that prevents her brilliant observations from finding the smooth landing that they so nearly achieved.
A concise and refreshing indictment of the implications of the sexual revolution featuring spicy chapter headings:
• Sex must be taken seriously
• Men and women are different
• Some desires are bad
• Loveless sex is not empowering
• Consent is not enough
• Violence is not love
• People are not products
• Marriage is good
Now that political ideology has replaced traditional religion as the faith system of choice for the educated elites in Western cultures, it is not surprising that a new genre of books has appeared. In it, former communicants of the “progressive” church publish their “95 theses” which declare the church is corrupt and morally bankrupt. Like Luther, the alternative they propose turns out to be a call for a reactionary return to a more regressive, but somehow idealized and purified, past.
I first saw a summary of this book as an opinion piece in the Wall Street Journal. I decided to read the whole book because I was interested to see how Perry would develop her arguments and what her alternative approach (only hinted at in the WSJ piece) might be. Sadly, I was greatly disappointed.
First, I do think the book is worth reading and if the author hadn’t made some outrageous, unsupported claims, it might have deserved three stars. Unlike the genre-sharing “Jews Don’t Count,” this book is not a personal screed nor an attempt to gain favor with the church (in this case) mothers. Some of the main points she makes are quite valid. It is maddening that they need to be stated in the first place.
Among her points that are obviously valid if one accepts the principles of Western Enlightenment 101: humans are a product of a million years of evolution that has made male and female representatives of the species behave differently and have different drives, desires and practices regarding sexual relations. Sex is at the center of both our biological and cultural activity and so needs to be taken seriously. Humans need to be treated with respect and dignity, and so sexual practices that demean people should be condemned and discouraged. Consent provides no protection from demeaning behavior. Violence causes suffering and so is always wrong behavior.
Of course, the “progressive” church (like some variants of the “right wing” alternative) reject and attack the ideas of the Western Enlightenment (in the “progressive” case mostly on the grounds that they are a creation of “white, straight, colonialist men”). So the fact that a former self-described disciple of the feminist denomination uses the weapons of the Enlightenment to attack this church, is somewhat amusing and a bit refreshing.
Where the book disappoints is that after having the epiphany that the feminist church is corrupt, Perry’s moral courage and imagination fails her. The only alternative she sees is a reactionary return to “traditional” values, despite her own acknowledgement of the harm and suffering these pre-modern ideas caused. It is not surprising that many of the positive reviews here are written by people who are members of traditionalists communities, who are adept at ignoring the violence and suffering being perpetrated in their own communities in the name of traditional sexual mores.
More disappointing, is Perry’s resort to propaganda instead of reasoned argument, to support both her criticism and call for reaction. All the rhetorical tricks are here - straw person arguments, misuse and abuse of research and statistics, the use of weasel words that without evidence or real justification stretch “might be” into “is”, the (mis)use of anecdotal stories to support her argument when there is no good evidence etc etc etc.
Ultimately what drives her propaganda is a set of alternate beliefs Perry creates for her new church, some implicit, others explicit: because males (on average) are stronger than females, females are overwhelmingly the victims of male violence and so require special protection. The invention of the pill along with male (on average) greater desire for casual sex leads to suppression of female goals and desires, which further encourages and exacerbates violence against females. Sexual liberty inevitably leads to sexual libertarianism, where humans are instrumentalized and commodified.
I deliberately put the “on average” in parentheses, because Perry mentions repeatedly that normal distributions tell us nothing about any individual’s behavior nor even mass behavior. Yet Perry herself ignores this and repeatedly moves from arguing that because some group of males may behave in a certain way (e.g. like casual sex), males in general behave in this way. She very quickly moves down the slippery slope because it’s the only way she can support her arguments.
This review is already getting way too long so I’ll first mention an alternative worth looking at on the topic of taking sex seriously and avoiding demeaning behavior in sexual relations. Specifically, Betty Martin’s “Wheel of Consent” directly addresses, in an intelligent and comprehensive way, how to redefine consent so that it addresses human dignity and respect, without reasserting traditional sex mores as the only reasonable alternative to being instrumentalized by casual sex. Martin’s approach undermines Perry’s argument that sexual liberty inevitably leads to libertarian exploitation, while agreeing with the point that traditional ideas of consent are not sufficient, and that having agency doesn’t guarantee we act wisely.
I’ll conclude this review with a couple of examples of Perry’s propaganda which I found particularly maddening. To make the argument that females are the overwhelming victims of male violence, Perry attempts to criticize the “feminist” idea that rape is about misogynistic assertion of power, not pleasure. In fact Perry notes that it was her epiphany on this point that led to her reappraisal of the “feminist” church. I should note that, as with all complex human behaviors, this is something really hard to prove one way or another, and there is no reason to assume that it might not be both, or something else as well.
As part of her criticism, she quotes statistics on rape. While noting that these statistics are wildly underreported, she quotes a statistic of 2-5% for males being raped. She then states that this seems to correlate with the number of homosexual males, thereby proving that males who love males are also doing it for pleasure.
Where to begin? We don’t know how many males are exclusively attracted to other males. When asked how many have engaged in same-sex behavior, the number (in some studies) is closer to 10%. Again this is almost certainly an underestimate. We know as well that in many male-dominated cultures and sub-cultures, a very large number of males engage in male-on-male sex, and it is considered normative, even desirable (e.g. Athenian elites). From an evolutionary behavioral perspective, it is quite possibly the case that males who are purely heterosexual are on the fringe of the distribution and most males are inherently bisexual. We don’t and can’t really know. What we can know is that far, far more males than 3-5% have been raped and that many self-identifying heterosexual males rape other males (cf. prison sexual violence). This doesn’t disprove Perry’s argument that rape is for pleasure, but it illustrates how she plays fast and loose with statistics.
It also illustrates a more important point. Perry seems to have an enormous blind spot about violence against male bodies. In the beginning of her chapter on prostitution, she brings up the appalling fact that modern armies see prostitution as a necessary recreation for conscripts. This serves as a segue into her argument that prostitution is always violence against women and casual sex is ultimately a variant of prostitution. But why is it surprising that the army—an institution that instrumentalizes male bodies, that causes enormous numbers of deaths, injuries and psychological trauma almost exclusively to males, that justifies this violence as part of some greater good—would have no problem setting up brothels? In fact, by bringing up the army Perry is ignoring the elephant in the room: it is quite likely that male-on-male violence, sexual and otherwise, is far more prevalent in every society than male-on-female violence. Obviously this doesn’t justify any type of violence, but it does undermine her thesis that females require “special” protection because they are overwhelmingly the victims of male violence.
One last example. In her argument in support of traditional monogamous marriage, Perry notes that since the pill and the rise of a culture of sexual liberty, the number of single-parent households has risen dramatically. She argues that with the availability of casual sex, men are less incentivized to sacrifice freedom for parental responsibility.
The statistics on this are quite interesting. First, in the US at least, single-parent households have been trending down again for a number of years, and are now once again below 1995 levels, although still more than double the numbers in the seventies. But if sexual liberty is the dominant paradigm, one would think that a younger generation of males raised in this paradigm would be even less willing than their fathers to hang around. Moreover, the big difference between the present and 1995 is that a small but still significant share of these single-parent households are now led by males.
In addition, the US and the UK are outliers, with 23 and 21 percent single-parent households respectively, as compared to 12% in Germany. Which raises an interesting question: Is sexual liberty the cause of sexual libertarianism, as Perry contends, or is sexual libertarianism a reflection of cultures that highly value economic liberalism and individualism, such as the US and the UK? Germany and other countries where fraternity is a value are inherently more family oriented and so have fewer unwanted babies raised by single mothers. Keep in mind prostitution is legal in Germany, which further undermines Perry’s arguments.
I could go on and on and on. Bottom line: if you read this book, read it critically, and carefully fact check Perry’s arguments. Try to reimagine a way to balance liberty, equality and fraternity in your dealings with others and to live a life of dignity and respect of your own self and others.
Definitely 4.5 stars, maybe more. Louise Perry is a young Oxford Union debate winner on the topic of whether to welcome the era of "new porn." Porn star Jenna Jameson had won a similar debate some 20 years previously, though arguing the other side. Perry points out that Jameson would be taking the opposite view today, having long since left the industry and become an outspoken critic of its deleterious effects, particularly on women.
Perry in her book walks through the history of the sexual revolution, which primarily hit its stride in the 1960s with the advent of the Pill and other contraceptive methods, including abortion. She contrasts Hugh Hefner with Marilyn Monroe. The former profited enormously by exploiting the likes of the latter. Hefner ultimately purchased the plot and was buried next to Monroe many decades after her death, which itself is an irony of the power imbalance between men like Hefner and women like Monroe.
This book is a brilliantly argued and written take-down of the myriad harms that the sexual revolution has wrought on women and men. It is a thoroughly secular analysis of a problem that faith groups have long recognized and sought to prevent and mitigate. Perry is a young feminist well educated in gender studies and experienced in working with domestic abuse and rape crisis victims. She recognizes and acknowledges the fact that indeed men and women differ, both biologically and psychologically. And, as a result, the normal distributions of sexual interests and proclivities vary between the sexes. This cannot be denied, although feminists have long sought to do exactly that, largely to the detriment of women.
Perry argues effectively that people are not commodities, that violence is not love, and that consent cannot alone be the rationale for permitting harms to continue. We are in the midst of seeing an ever increasing appetite for sexual violence, and a normalization of it through pornography in media. And yet people are more isolated, unhappy, and anxious than ever before. This book demonstrates how women in particular are harmed by a system that primarily benefits a relatively small sector of men with a high sociosexuality index.
Ultimately, Louise Perry argues for marriage, a return to the norms that have been undone by decades of hostility, but long served the interests of women and men alike. She does so by citing a wide ranging spectrum of feminist scholars and writers from Mary Harrington to Andrea Dworkin to Shulamith Firestone (who incidentally died sad and alone). At the same time, Perry casually observes that changing norms have rendered opposition to same-sex marriage as "cruel and nonsensical," but without much analysis and in contradiction to the points she makes about how complementarity, numericity, and permanence benefit men, women, and their biological offspring within the protection of marriage (something that is an impossibility with same-sex participants whose interests are ineluctably at odds). Nevertheless, she makes a persuasive case that the harms of sexual independence brought on by the individualism within the last century far outweigh the benefits. I think this is a must-read for anyone who is serious about women's rights.
When people ask if I'm a feminist, I can't help but to answer, “I guess so, but also not really?”
Whether you agree with her or not, Louise Perry's writing is really thought-provoking. There were many parts of this book I really enjoyed and found myself agreeing with — from her firm attitude towards porn, to open discussions about loveless/violent sex.
An unyielding feminist and as someone who has worked closely with vulnerable women, Louise Perry writes from a unique perspective arguing against the belief that modern-day ‘sexual freedom’ is truly liberating. She is right, mostly.
But I can also imagine someone countering her argument by deeming it self-serving and insufficient. She encourages readers to ‘get married’ and insists that (monogamous) ‘marriage is good’ - but while her core argument for marriage is its commitment and stability (which I do agree with), where the institution of marriage is no longer respected in society, divorce seems like an eventual outcome anyway. (I suppose I think that as individuals we shouldn’t *just* get married as a solution, but rather we should envision a culture that respects marriage more.)
I don’t know if she addressed this, but there was also an inherent belief that women would/should end up with a significant other. In one of the last chapters she brought up that the older wave of radical feminists who pushed for childlessness ended up dying alone, because feminist friends and family would eventually drift away as their ‘bonds’ were not ‘durable’ (as compared to having a partner+children to look after her). I’m not convinced this was an issue of being childless/unmarried per se - but rather, the lack of strong companionship/friendships. Surely, if I'm reading her thesis correctly, unmarried women - like the Rich Single Aunt Trope - should also be able to live just as fulfilled lives, without men? Happy to be proven wrong on my reading though.
I actually really enjoyed this book, and I think this makes some really interesting arguments.
All my life’s experience as a 1960s feminist leads me to agree with this wise and well researched book. The sexual revolution has not been good for most women, but great for most men. Most men and most women have different sexualities, and denial of the difference harms women. Pornography harms male and female sexuality and endangers women. Monogamous marriage is the best option for bearing and raising children.
Removed one star for repetitiousness. Otherwise very good. Just reading the chapter titles is depressing. How sick are we as a society that we need someone to argue that “Some Desires Are Bad,” “Violence Is Not Love,” “Marriage Is Good,” etc?
It’ll be tempting for intellectually serious religious people to say “I told you so” here, but let’s just be glad if the society-wide backlash against the sexual revolution finally arrives, helping solve the collective action problem (we need to go “Lysistrata” to some extent, but that’s impossible right now), and hopefully producing a more just and flourishing culture.
Some 🔥passages:
“Women are still expected to please men and to make it look effortless. But while the 1950s ‘angel of the house’ hid her apron, the modern ‘angel of the bedroom’ hides her pubic hair. This waxed and willing swan glides across the water, concealing the fact that beneath the surface she is furiously working to maintain her image of perfection. She pretends to orgasm, pretends to like anal sex, and pretends not to mind when her ‘friends with benefits’ arrangement causes her pain. I’ve spoken to women who suffered from vaginismus for years without telling their partners that being penetrated was excruciating. I’ve also spoken to women who have had abortions after hook-ups and never told the men who impregnated them because, while sharing the inside of their bodies was expected, revealing the inconvenient fact of their fertility felt too intimate.”
“Studies consistently find the same thing: following hook-ups, women are more likely than men to experience regret, low self-esteem and mental distress. And, most of the time, they don’t even orgasm. Female pleasure is rare during casual sex. Men in casual relationships are just not as good at bringing women to orgasm in comparison with men in committed relationships—in first-time hook-ups, only 10 per cent of women orgasm, compared to 68 per cent of women in long-term relationships. . . . One typical study found that 30 per cent of women experience pain during vaginal sex, that 72 per cent experience pain during anal sex, and that ‘large proportions’ do not voice this discomfort to their partners. These figures don’t suggest a generation of women revelling in sexual liberation—instead, a lot of women seem to be having unpleasant, crappy sex out of a sense of obligation.”
“The liberal feminist narrative of sexual empowerment is popular for a reason: . . . Adopting such a self-image can be protective, making it easier to endure what is often, in fact, a rather miserable experience.”
“Research conducted by ComRes in 2019 found that over half of eighteen-to-24-year-old UK women reported having been strangled by their partners during sex, compared with 23 per cent of women in the oldest age group surveyed, aged thirty-five to thirty-nine. Many of these respondents reported that this experience had been unwanted and frightening, but others reported that they had consented to it, or even invited it. And here lies the complication, because you don’t have to look hard to find women who say they love being strangled, and these willing women—girls, really, many of them—are held up as mascots by those who defend the fashion for sexual strangulation. . . . With consent, anything goes. . . . Dr. Helen Bichard rejects on medical grounds the idea that strangulation can ever be done safely, describing this as an urban myth: ‘I cannot see a way of safely holding a neck so that you wouldn’t be pressing on any fragile structures.’ And, given the possible consequences of strangulation, until recently only partially understood, Bichard argues that the vast majority of laypeople are not capable of giving informed consent to it.”
“Many of the women who seek out strangulation have a very particular—and very misguided—understanding of what strangulation means when men do it to them during sex. . . . They think strangulation indicates a man’s love, passion and desire for them. More often than not, it indicates none of these things, but, in a culture in which the differences between male and female sexuality are routinely denied, particularly by liberal feminists, it shouldn’t surprise us that many of these young women take the lead from erotic fiction . . . not realising that real-life Christian Greys usually have no interest whatsoever in the well-being of the women they (to use a nasty piece of porn terminology) ‘hatefuck.’”
“I don’t know what men think we are supposed to do with their dick pics, but I know of no woman who would masturbate to an image in which the rest of the person has been cropped away, leaving only a slab of flesh ready to be laid out on the anatomist’s table.”
“Sex workers can act as sources of sex advice only if we understand sex to be a skill set that must be learned and refined across different partners with goodness a result not of intimacy but of good technique. In this framing, sex becomes something that one person does to another person, not with another person. All of the emotion is drained away, leaving the logic of the punter triumphant. We must resist that logic at all costs. If we try to pretend that sex has no special value that makes it different from other acts, then we end up in some very dark places. If sex isn’t worthy of its own moral category, then nor is sexual harassment or rape. If we accept that sex is merely a service that can be freely bought and sold, then we have no arguments left to make against the incels who want to ‘redistribute’ it.”
The most miserable thing about this book (and modern radical feminism of its ilk) is just how bleak the outlook is for straight women. At least separatism is creative, interesting,,,,what's the word,,,, oh yeah -- radical?
The thesis is that men and women differ (on average) in their desire for sexual novelty and therefore the normalisation of casual sex brought about by availability of contraception and legal abortion benefits some men at the expense of most women. Sure. But since men can adopt multiple mating strategies ('cad or dad') and women invariably can't (because evolution!) the only solution to the discrepancy in sociosexuality is monogamous marriage*. And if it isn't enough to imagine the benefits for women, then do think of the children.
It stands on a shaky foundation of evolutionary psychology - I say shaky because Perry admits that humans likely evolved to be somewhat polygynous but she still states that monogamy is the ideal mating strategy because "successful" societies employ it (would have liked some more citations here). Many of the citations cover only partial claims (and she frequently cites opinion columns, blogs etc). Evolutionary psychologists are called evolutionary biologists, presumably to lend credibility since the outputs of the field of evolutionary psychology are widely criticised by evolutionary biologists. She does not thoughtfully or genuinely engage with opposing viewpoints on sex differences in humans or animals, trans athletes or the psychology and sociology of human sexuality. This is of course not the goal, but since she is praised for her academic rigor it's worth pointing out that this effort is not at all academic!
I agreed with parts of the analysis, as I find myself agreeing with a number of radical feminist arguments! She opposes herself to liberal feminism and argues thoughtfully that consent is an inadequate framework to understand sexual ethics. The question of whether women can truly consent to X under patriarchy is interesting and worth discussing, obviously! But the book then veers rightward into transphobia, promotion of the nuclear family and lamentation of no-fault divorce, and hand-wringing about porn, bdsm and the sex lives of young people. I sympathise with Perry's viewpoint as an activist working against 'rough sex defences', but ultimately her diagnosis does not match my own experience as a woman growing up post-sexual revolution or of women I know. Could we all be wrong?
Can we not imagine a better world for ourselves, with systems of care outside of the nuclear family? In later chapters Perry even warns anti-natal feminists that they will die alone if they don't marry and have children (!!) through the story of Shulamith Firestone whose body was sadly left undiscovered for days after her death. Forgive me if I think women (and all people) should be cared for and supported without being coerced into marriage or child-rearing. Can't we imagine a world where people can experience intimacy and family however they like!!! Can't we!!!
*Aside: she dismisses ethical non-monogamy or polyamory with a quick "I looked on reddit and all of the polyamorous people are ripped apart by sexual jealousy or deluded! so that's that!"
An outstanding read - compelling, challenging, and radical in its analysis. Perry effectively challenges the "progressive" narrative of the Sexual Revolution - noting that while many good things have been gained for women and men, there is a dark side to the Sexual Revolution that serves capitalism, commodifies desire and intimacy, and harms human relationships, society, and the human spirit - socially and individually. Perry's analysis is refreshing as it is not rooted in religious hang-ups, or prudishness, but rather in a genuine and sobering analysis of the excesses of the unchecked progressive narrative of the Sexual Revolution. She effectively critiques the prevailing attitudes and counters the argument that "only consent matters." Perry's book is ultimately a sobering indictment of our pornified society which has, it seems, taken the same disenchantment of nature that it took to the world following the Enlightenment and applied it to the disenchantment of sex following the Sexual Revolution. That is to say that while the Enlightenment and the Sexual Revolution both brought about beneficial changes for individuals and society, they also carried with them excesses and ideas which caused more harm than good. Perry's call is to a corrective and transcendent opt-out of the sexual mores of the dopamine hijacked and disenchanted herd - and a call, not to a return to tradition, but to a transcendent new morality based fundamentally in the primacy of human dignity.
Meh. Closed minded. Judgmental. Homophobic. Transphobic. Kink shaming. Slut shaming. She seems to think that just because something isn’t for her, that it’s not for anybody. To her sex should only be for passionate lovemaking between married individuals solely to make a baby but thinks men can’t control themselves enough to do that. There is nothing wrong with sex just because. This isn’t the 50s and women enjoying sex is not the scandal it once was.
About to be a long one (ha). This book wasn’t written from a religious or even conservative/Republican point of view, but instead from a woman who worked at a rape crisis center and gave consent workshops. The beginning was a little technical and outlined different political things, but the book got so so good, while also being so heartbreaking. She basically makes the point that as our culture has tried to make life more open and free for women, it has in turn hurt them more. Many times I gasped or felt so angry/fired up about certain issues.
“I propose a different solution, based on a fundamental feminist claim: unwanted sex is worse than sexual frustration. I’m not willing to accept a sexual culture that puts pressure on people low in sociosexuality (overwhelmingly women) to meet the sexual demands of those high in sociosexuality (overwhelmingly men), particularly when sex carries so many more risks for women, in terms of violence and pregnancy. Hook-up culture is a terrible deal for women and yet has been presented by liberal feminism as a form of liberation. A truly feminist project would demand that, in the straight dating world, it should be men, not women, who adjust their sexual appetites”
“One of the most important differences between the sexes is that men are higher in the quality that psychologists call ‘sociosexuality’ – the desire for sexual variety. This means that, on average, men are much more likely than women to desire casual sex. This sexuality gap produces a mismatch between male and female desire at the population level. There are a lot more straight men than there are straight women looking for casual sex, meaning that many of these men are left frustrated by the lack of willing casual partners. As we have seen, in the post-sexual revolution era, the solution to this mismatch has often been to encourage women (ideally young, attractive ones) to overcome their reticence and have sex ‘like a man’, imitating male sexuality en masse. The thesis of this book is that this solution has been falsely presented as a form of sexual liberation for women, when in fact it is nothing of the sort, since it serves male, not female, interests. But one of the points I have been keen to stress throughout is that, although our current sexual culture has significant problems, this does not mean that the sexual cultures of the past were idyllic. All societies must find some kind of solution to the sexuality gap, and those solutions can be anti-woman in many diverse ways.”
Chapter 5 was specifically on porn and how it has shaped society and the effect it’s had on the sexual culture for men and women. She specifically writes about the abusive and domineering nature porn has taken. The author points out how the leftist feminists have been silent on this issue, but instead choose to support it. A few quotes that were powerful:
“In fact, the most committed defences of porn come nowadays from self-described ‘sex-positive’ leftists who claim that any criticism of the industry must necessarily be a criticism of its workers (funnily enough, they do not make the same defence of industries that rely on sweatshop labour). These apologists are aided, in part, by the efforts of the industry to sanitise its practices. Pornhub, for instance, runs a smoke and mirrors exercise it calls ‘Pornhub Cares’, with campaigns against plastic pollution and the destruction of bee and giant panda habitats (‘Pornhub is calling on our community to help get pandas in the mood. We’re making panda style porn!’) But a far more effective counter to any criticism of the industry is the sexual liberation narrative, always available to comfort any porn user who feels a squirm of discomfort at what they’re funding. Kacey Jordan, Jenna Jameson, Vanessa Belmond and Linda Lovelace all gave some version of this narrative at the height of their fame, responding to anyone who asked with a dismissive ‘of course I’m consenting.’ All of these women later changed their minds, after the porn industry had had its fill of them, and after the damage to their bodies and psyches had already been done. Taking a woman at her word when she says ‘of course I’m consenting’ is appealing because it’s easy. It doesn’t require us to look too closely at the reality of the porn industry or to think too deeply about the extent to which we are all – whether as a consequence of youth, or trauma, or credulousness, or some murky combination of all three – capable of hurting or even destroying ourselves. You can do terrible and lasting harm to a ‘consenting adult’ who is begging you for more”
“The porn industry would not produce content depicting abuse unless there were a demand for it. There is a darkness within human sexuality – mostly, but not exclusively, within men – that might once have been kept within a fantasist’s skull, but which porn now makes visible for all the world to see. The industry takes this cruel, quiet seed and makes it grow”
“But we all know that in the real world that doesn’t quite work. If we recoil from Norfolk’s account of fifty men queuing up to sexually violate a teenage girl who had been abandoned by the state services tasked with protecting her, how can we then watch video of a young woman only a few years older, looking just as much like a child, being violated by even more men, without a similar response? The sore, torn orifices are the same. The exhaustion and disorientation are the same. The men aroused by using and discarding a young woman presented to them as a ‘teen’ are also much the same.”
In this book, Perry assembles a host of data--everything from scientific studies to TikTok trends--and argues that the "freedom" that the sexual revolution promised women has in fact only benefited men, pushing women to stop serving men by caring for their households and instead start serving men by freely offering up their bodies. Perry is neither religious nor conservative--she's a feminist who has seen the damage that has been done to women in the name of sexual freedom and takes a very clear-eyed look at the subject, not afraid to challenge some of the sacred cows of feminism and liberalism.
Interestingly, what the book ultimately boils down to is an affirmation that a Biblical sexual ethic is the best way to support human flourishing. She doesn't get to that conclusion via the Bible, but rather by seeing the destruction wrought by the rejection of Biblical principles. She's not interested in the subject "because God said" but because she is grieved and angry by the way women have been used and abused, tricked by society into consenting to an encounter or a lifestyle that they don't truly want and that leads to their harm.
The book itself is not a fun read. It's a sobering reminder of the toxic disaster that is hook-up culture: the culture our daughters are going to have to navigate. But it is encouraging that there are some outside the conservative world who are able to set aside ideology and look clearly at the consequences of our choices, even if those consequences point to hard truths. Perry's conclusion is one that is both simple and incredibly weighty. To have the best chance at flourishing, "we need to re-erect the social guard rails that have been torn down. And, in order to do that, we have to start by stating the obvious. Sex must be taken seriously. Men and women are different. Some desires are bad. Consent is not enough. Violence is not love. Loveless sex is not empowering. People are not products. Marriage is good. And, above all, listen to your mother."
Warning: this is full of very frank discussions of all aspects of sex (including the hard-to-read ones) and there are plenty of four-letter words. I found them all appropriate to the context, but if you prefer your books more kid-friendly, this is not one for you.
The best and most interesting book I have read all year. Tore through it in just two days. A must read for anybody with questions regarding different perspectives on feminism, and potentially, a must-read full stop.
Very easy to dive in to and also to follow along with the author. Well constructed arguments. Very poignant and impactful.
I thought I was going to end this year without rating a single book five stars. This changed that, and may in fact be, in my opinion, the most important book I've ever read.
Sure, I already agreed with most of her points, but I wasn't sure *why*. They were intuitive feelings I held once as a teenager, and regained in recent years when I began questioning the liberal feminist, progressive, 'sex positive' dogma on sex that I'd been fed for years. And that's actually a point she makes- a lot of us intuitively *know* a lot of this stuff, we don't need academia to justify it. But today's society tells us we should. We should do away with our stuffy monogamy, our 'kink shaming', our aspirations of marriage and family.
When I expressed my desire to read this book I had one person reach out to me to try and dissuade me because the author was 'problematic' (has conservative leanings). I ignored them and read it anyway, done with being told by the authoritarian left what I am and am not allowed to engage with.
Earlier this year, frustrated with being single and deeply lonely, ashamed of my lack of intimate experience, an older millennial cousin gave me some very bad advice that I thankfully didn't follow: have casual sex. This woman isn't necessarily a progressive or a liberal, and is deeply apolitical. How I wish I could persuade her to read this book so she would understand why I was so repulsed by that suggestion.
There's a lot that can be said about this book, things I deeply loved, but my favourite has to be the section on marriage. As a feminist, for years I'd been told marriage was bad for women, that it enabled abuse, that it was invented to control female sexuality. Deep inside, I rejected this narrative, while paying some lip service to the parts of the argument that were admittedly either hard to argue with or just demonstrably true. I now am able to put into words *why* I reject it: because the reverse is true. If marriage is designed to curtail anyone's sexuality, it's men's, and it protects women.
I need more feminists and liberals to read this book. Most of Perry's audience are antifeminist people on the political Christian right and I'd be willing to put money on a guess that this is being used against her by progressives. But I don't think that would be the case. I think plenty of feminists would agree with most or all of the ideas expressed in this book if they actually read it, but the progressive left often discourages people from reading 'problematic' 'cancelled' authors, or familiarising themselves with ideas of those they disagree with. However, if genuine feminists (and not just liberal misogynists paying lip service to feminism) gave this a chance I strongly believe it would resonate with many of them and Perry's audience would balance itself out.
This is the feminism I've been looking for. Disillusioned years ago with liberal feminism, disagreeing with enough elements of radical feminism that it wasn't the right fit, I turned to cultural feminism, and while that's still roughly where I'd place myself, enough of it is tied in with elements of radical feminism I disagree with that it still wasn't a perfect fit for me. But whatever Perry's brand of feminism is, that's me. A common-sense feminism. A feminism combining the best of traditional and progressive values. A feminism that benefits the largest number of women, not just the rich, protected ones (liberal feminism) or childfree female separatists (radical feminism). A feminism for the issues we face as women today, with the hindsight of our past mistakes to guide us.
The book makes a lot of very important points about the pitfalls of liberal feminism and so-called “sex positivity” in the context of an enduringly patriarchal society - but the author lost me a bit with the last chapter / proposed solutions (which felt perhaps a bit too conservative and literal).
But - this is still very much worth reading, and I felt it spoke to a lot of the discomfort I have felt through my 20s, encouraging a lot of important reflection at this juncture of my life (as a woman nearing 30 who has experienced many of the ups and downs of modern dating and relationships), about what I have been conditioned to accept as "normal". I think anyone would benefit from reading this.
A shallow, one-sided presentation of the cycle of violence and sexual dysfunction. There was some interesting analysis of divorce laws and their aftermath, and of the philosophical origins of the sexual revolution, but most else was either shallow or rather ridiculous.
For example, the author attempts to argue that consent doesn't really matter when it comes to sexual activities between adults: when a man chokes a woman during sex, it means that this man likes to "beat up" women to get hard, and even though the woman consented, the man should be criminally charged because this kind of sex isn't "good". Furthermore, apparently these kinds of "bad" fantasies are really liked only by men, and women mostly like the regular "good" kind of sex. It appears as though she's desperately trying to depict women as some Victorian-era manifestations of purity, who faint at the mere mention of penises. She even manages to argue that the music video "Wet Ass Pussy" by Cardi B has nothing to do with female sexuality and somehow is a representation of male sexuality. I mean, I agree that the video is pretty distasteful, but can we allow some accountability on the part of women? Also, the problem of violence and sexual violence is obviously a male problem, because men are violent and rapey.
This kind of biased, one-sided take on a complex problem like the cycle of violence really serves no purpose other than to divide men and women and to claim that one team is better than the other. It will take us further from any possible solution.
On the one hand, if we wish to broaden our understanding, we could look at some statistics: for example, that women start about 50% of domestic violence incidents in many Western countries, or that in the 2010 CDC intimate partner sexual violence survey there were the same number of male victims during that year as there were female victims, except that male victims were categorized not as "rape" but dismissed as "made to penetrate". Still a long way to go to recognize male victims of sexual violence, even in the Western world.
The real taboo topic still seems to be female violence and how it relates to this cycle that we’re talking about. Has anyone ever heard any feminist talk about the violence mothers commit against their babies and toddlers?
I mean yes, men are generally more violent than women, but there is one very important thing to consider. Before men are men, they are boys. When they are boys, they are parented, educated and disciplined by women. They are raised by women. And those women scream at them and hit them a lot. 80% of British mothers hit their babies before the babies are even 1 year old. An estimated 25% of pedophiles are women. According to a US study, middle class mothers hit their babies and toddlers an average of 900 times a year. The fact that endlessly bashing defenseless babies has never come up in the discourse of the cycle of violence among feminists, or really anyone for that matter, is absolutely ridiculous and serves as a reminder of how credible these people really are.
The sexual revolution is bad primarily for women, but also for the family unit, men and society. Young women are being fooled (by feminist and popular culture) into thinking that promiscuity (but more specifically the idea of sex as meaningless) is empowering, when in fact it is unnatural, confusing and even potentially traumatising. Finally a book written by a woman addressing these issues, and I hope that the fact it is written by a woman who calls herself a feminist will mean it finds its way into more young women's hands. It is not perfect, I hold differing views on some issues, and in parts her arguments are left not quite complete … but all in all a worthwhile read.

The Case Against the Sexual Revolution: A New Guide to Sex in the 21st Century
Ditching the stuffy hang-ups and benighted sexual traditionalism of the past is an unambiguously positive thing.
The sexual revolution has liberated us to enjoy a heady mixture of erotic freedom and personal autonomy.
Right? Wrong, argues Louise Perry in her provocative new book.
Although it would be neither possible nor desirable to turn the clock back to a world of pre-60s sexual mores, she argues that the amoral libertinism and callous disenchantment of liberal feminism and our contemporary hypersexualised culture represent more loss than gain.
The main winners from a world of rough sex, hook-up culture and ubiquitous porn - where anything goes and only consent matters - are a tiny minority of high-status men, not the women forced to accommodate the excesses of male lust.
While dispensing sage advice to the generations paying the price for these excesses, she makes a passionate case for a new sexual culture built around dignity, virtue and restraint. This countercultural polemic from one of the most exciting young voices in contemporary feminism should be read by all men and women uneasy about the mindless orthodoxies of our ultraliberal era.
Author says she moved on from her “liberal” views as she matured. Great direction, and I love the message, but - influenced by the godless elitist portion of British society in which she grew up, and where I lived for many years - she doesn’t seem to understand that without God as a primary source, nothing can have real meaning, especially when it comes to our fundamental relationships.
Even her practical advice remains untethered, because she has been taught (by her friends and social environment) to reject what she calls “religion and tradition”.
Writing with great intelligence, she struggles to build the right type of framework (marriage is good, porn is bad, etc.) from scratch and she gets to so many great conclusions that, without any metaphysical context, remain suspended in mid-air. This leaves her with a shell without an egg.
The ugly side of beauty (The Guardian, 2005)

"Shoes," Sheila Jeffreys says, "are almost becoming torture instruments. During a woman's daily make-up ritual, on average she will expose herself to more than 200 synthetic chemicals before she has morning coffee. Regular lipstick wearers will ingest up to four and a half kilos during their lifetime." We are talking about Jeffreys' latest book, Beauty And Misogyny: Harmful Cultural Practices In The West, and she is in full flow about the horrors of what she calls "the brutality of beauty".
Jeffreys, a revolutionary lesbian feminist, is pursuing her 30-odd-year mission to shift women out of their collective complacency. Beauty And Misogyny is her sixth book. Like the others, its central theme is an exploration of the use of sexuality by men to dominate women. Much of it is spent arguing that beauty practices - from make-up to breast implants - should be redefined as harmful cultural practices, rather than being seen as a liberating choice.
Jeffreys' introduction to feminist campaigning began in the early 70s when she joined a socialist feminist group (she was later thrown out for suggesting men were to blame for the oppression of women). Sandra McNeill, who met Jeffreys in that group, remembers her as "the Andrea Dworkin of the UK. She was, and still is, seen as an extreme, man-hating feminist". Dworkin, as it happens, lived with a man, whom in 1998 she married.
Not Jeffreys. She became a lesbian in 1973 because she felt it contradictory to give "her most precious energies to a man" when she was thoroughly committed to a women's revolution. Six years later, she went further and wrote, with others, a pamphlet entitled Love Your Enemy? The Debate Between Heterosexual Feminism And Political Lesbianism. In it, feminists who sleep with men are described as collaborating with the enemy. It caused a huge ruction in the women's movement, and is still cited as an example of early separatists "going way too far".
"We do think," it said, "that all feminists can and should be lesbians. Our definition of a political lesbian is a woman-identified woman who does not fuck men. It does not mean compulsory sexual activity with women." Although many of the more radical feminists agreed, most went wild at being told they were "counter-revolutionary".
Jeffreys' brusque manner and her seeming conviction that she is 100% right when discussing her topics of interest have led to accusations of arrogance from fans and critics alike. Although a funny and charismatic speaker, she can irritate those who feel they are being dictated to. However, she can be generous with her time, particularly with young women new to the movement.
Jeffreys sees sexuality as the basis of the oppression of women by men, in much the same way as Marx saw capitalism as the scourge of the working class. This unwavering belief has made her many enemies. Postmodern theorist Judith Halberstam once said, "If Sheila Jeffreys did not exist, Camille Paglia would have had to invent her."
In Jeffreys' latest book, she questions why the beauty industry is expanding, and why liberal feminists should see a virtue in women having the power to choose practices that a few years back were condemned as oppressive. The critique of beauty practices, written about by Dworkin in Women Hating, in 1974, has today all but disappeared, making way for procedures that "break skin and spill blood".
The history of the beauty industry is threaded through the book. Cosmetics have been used to alter appearance for thousands of years, sometimes exclusively by prostitutes and others deemed disreputable, other times as a political gesture. The suffragettes fought for the right to look and dress as they saw fit, some wearing red lipstick as a symbol of feminine defiance. After the second world war, a shortage of men meant that women tried hard to look as attractive as possible in the hope of getting a husband, and make-up became, Jeffreys argues, "a requirement that women could not escape, rather than a sign of liberation".
Born in 1948 into a working-class family from the East End of London - though her parents were based at an army camp in Munster at the time - Jeffreys, who has been teaching international gender politics at the University of Melbourne since 1991, describes herself as a product of the postwar sexual revolution. Going from an all-girls grammar school to Manchester University in the late 1960s, she expected the intellectual atmosphere of Left Bank cafes. What she found instead was young men sitting around the students' union bar making smutty jokes. She sank into a depression that lasted several years.
During this time, she started sleeping around with men, considering it her "duty" to be liberated and progressive. This was the start of her interest in the politics of the sexual revolution, which was to result in her book Anticlimax: A Feminist Perspective On The Sexual Revolution (1990), in which she argues that newly achieved and much-vaunted sexual freedom did not constitute any real gain for women, but continued their oppression in another guise.
After university, while teaching at a girls' boarding school in 1972, she read Kate Millet's Sexual Politics, a groundbreaking analysis of female oppression and patriarchy. She became "enraged" at what she learned about men's abuse and control of women.
"My rage has never gone away," she says now. "I am grateful for that." She says she distinctly remembers the moment she realised, during a conversation about politics with a man, that he was seeing her merely as a woman, and therefore inferior. "I was furious. He actually said I had the brain of a man, and while in the past I would have been flattered, a dam had burst and everything became clear." While many of her generation of radical feminists have given up fighting, Jeffreys' passion has not abated.
Her books have a common theme, whether she is writing about Victorian feminism, the sexual revolution of the 1960s, "queer" sexual politics or the history of prostitution. As Jeffreys puts it, "Male supremacy is centred on the act of sexual intercourse, justified by heterosexual practice." For her, heterosexual sex is sexual desire that eroticises power differences. Lesbian and gay sexual practices do not escape her scrutiny. Two of her books, The Lesbian Heresy (1993) and Unpacking Queer Politics (2003), focus on how "queer" sexual politics have led to oppressed sexual minorities embracing any kind of sex, such as sadomasochism, in the name of liberation.
Jeffreys tends to see things coming before they happen. She was the one who warned, in the early 1980s, that pornography and sadomasochistic sexual practices would invade the lesbian community. They did. She predicted a global trend to call for the legalisation of prostitution. There was.
The idea that radical feminism was a phenomenon of the 1970s exasperates her: "The media always look for the 'new sexy feminism' that will enable them to put young women in sexy clothing on their pages who rail against man-haters and hairy-legged dykes, and say how much they love porn. This began in the 1980s, when the 'femme cult' got under way." She's referring to the likes of American writer Katie Roiphe, author of The Morning After: Sex, Fear And Feminism On Campus (1993), who argued that feminism has made victims of women and created a culture in which men are given "mixed messages" regarding sex, resulting in unfair accusations of "date rape".
In return, Jeffreys has been treated with hostility and ridicule. Pornographers named a dildo after her - The Sheila: A Spinster's Best Friend. Sexual libertarians are infuriated by her criticism of the practices they enjoy. When she pointed out in Anticlimax the need for feminists to challenge the dominance and submission characteristic of many a heterosexual relationship, she was pretty much a lone voice, and still is. Feminist Sheila Rowbotham said in response that she had abandoned attempts at equal relationships because "equality is not sexy". Author Bea Campbell considers Jeffreys "too full of intense rage, and deeply pessimistic about both men and women. She presents a political perspective that means there is no possibility of change".
Others oppose Jeffreys' position in a more general way. Natasha Walter, author of The New Feminism (1999), for example, argues that today's women do not want their behaviour "policed by feminism", but wish to enjoy sex with men, wear make-up, and dress in short skirts and high heels without feeling they are betraying feminism.
But Jeffreys' critics can get her wrong. It is precisely because she considers that the personal is political that she presents hope of fundamental change. She believes that, ultimately, true mutuality and equality between the sexes are possible, but they are dependent on every woman resisting the status quo and critically examining her life choices. Men, she argues, would be forced to change if women did.
Jeffreys' journey that culminated in Beauty And Misogyny began when she decided, in 1973, to abandon both heterosexuality and her feminine appearance. "I gave up beauty practices, supported by the strength of thousands of heterosexual and lesbian women around me who were also rejecting them. I stopped dying my hair 'mid-golden sable' and cut it short. I stopped wearing make-up. I stopped wearing high heels and, eventually, gave up skirts. I stopped shaving my armpits and legs."
The book is one she has wanted to write for years, as "liberal feminists and postmodernists" challenge the early feminist critique of beauty practices. "Not only are the practices creeping back, they are becoming more severe and invasive of the body itself," she says.
She has taken on a tough battle: the cosmetics industry is bigger than ever (in Brazil, for example, there are more Avon ladies than members of the armed forces). And she has taken on broader targets, too. The sex industry, the misogyny of fashion, what she calls the "mutilation" of transgender surgery and the dangers of sexual libertarianism are all seen by Jeffreys as intrinsically linked to the beauty industry.
In the chapter on cosmetic surgery, she looks at the growing pressure on women to conform to models of femininity derived directly from the sex industry, such as having trimmed labia and Brazilian waxed pubic hair. "Men's desire for bigger and bigger breasts, and clothes commonly associated with prostitution, has resulted from the mass consumption of pornography."
Jeffreys can always be relied upon to back up her arguments by unearthing facts that are both disturbing and hard to believe. She cites one example of a porn actor who sold bits of her genitals to "fans" over the internet after a labiaplasty operation. She points to studies that have found significantly higher rates of suicide among women who have had breast implants. The latest, conducted in 2003 by the International Epidemiology Institute of Rockville and funded by Dow Corning Corp, a former maker of silicone gel breast implants, included a study of 2,166 women, some of whom received implants as long as 30 years ago. Dow Corning also funded an earlier Swedish study, which examined 3,521 women with implants, and found the suicide rate to be three times higher than normal.
There are other unwanted effects. Nipples can lose sensation and, in extreme cases, rot and fall off; stomach stapling can cause severe swelling in the pubic area; and liposuction can leave a patient in serious pain. A number of women have died after surgery, while others have been left in permanent discomfort.
Jeffreys argues that many male fashion designers are "projecting their misogyny on to the bodies of women", and gives examples of collections featuring images based on sexual violence - Alexander McQueen's show for his masters degree was entitled Jack The Ripper, and depicted bloodied images of Victorian prostitutes. A later show in 1995, Highland Rape, featured staggering, half-naked, brutalised models. And John Galliano, in his 2003 collection for Christian Dior, Hard Core Romance, used the imagery of sadomasochism, putting his models in seven-inch heels and rubber suits "so tight they had to use copious amounts of talcum powder to fit into them".
"One notable difference in fashion shows in the past 10 years is that the models are required to show more and more of their bodies," says Jeffreys. "Some are posed to look as though they are about to engage in fellatio. Pole dancing is now a staple of some fashion events."
For Jeffreys, the last thing women should be doing once they achieve a semblance of choice is returning to practices imposed on them during darker periods. After the US invasion of Afghanistan, for example, beauty clinics opened up all over the country, offering cosmetics as an antidote to the enforced wearing of the burka. "You'd have thought the women would have had other things to worry about," she sighs.
She likens cosmetic surgery such as labiaplasty and breast implants to female genital mutilation. She concedes the distinction that genital mutilation is carried out on children who have no choice in the matter, "but the liberal view of choice, which is that women can now 'choose' to engage in harmful, oppressive actions, does not make the practice of slicing up women's genitals to please men any less vile". As Jeffreys points out, hymen repair surgery, which is available through the public health service in the Netherlands, is sought not only by women whose cultures require them to be virgins when they marry, but also by western women whose partners wish to penetrate a tighter vagina.
Jeffreys unearthed some frightening facts - for example, a Home Office paper claiming that BSE can be transmitted through beauty products because many contain bits of dead animal. Breast implants can contain brain, fat, placenta and spleen. A link between hair dye and bladder cancer was discovered in a US study of 3,000 women who use such products regularly, and formaldehyde, found in nail polish, shampoos and hair-growth preparations, has been outlawed in Sweden and Japan, with the EU allowing its use only in small, regulated quantities.
There is much evidence that children are being targeted by the beauty industry. Kiss Products, a cosmetic retailer, has joined forces with Disney to promote lip gloss and nail polish kits through licensed animated characters. Procter & Gamble is looking to market its Cover Girl cosmetic range to eight- to 10-year-old girls by making the use of make-up resemble game playing. "It is not only the cosmetic industry that is recruiting young customers," says Jeffreys. "It is becoming more common for young women from affluent families to be given breast implants for their 18th birthday."
Again, she blames the fashion industry. "Some designers are using 12-year-old girls in shows because their bodies are perfect to show off the type of clothing being peddled at the moment. Many men are sexually excited by this look, and the industry exploits this." Parisian designer Stella Cadente used models as young as nine in her 2001 show; it was reported that they wore "plunging necklines and high hemlines". And, Jeffreys points out, Cadente is not alone in using child models in the world of fashion.
There is little, if any, feminist critique of men's cross-dressing, but in Beauty And Misogyny Jeffreys provides a unique analysis of what she describes as "men adopting the behaviours of a subordinate group in order to enjoy the sexual satisfaction of masochism". She says we need look no further than transvestite pornography, with titles such as Enforced Femininity and Forced To Grow Breasts, to understand how femininity and womanhood have been developed to ensure that women are seen as different and less powerful.
Jeffreys maintains that transsexual surgery is an extension of the beauty industry offering cosmetic solutions to deeper rooted problems. She argues that in a society in which there was no such thing as gender, there would be no need to undergo such surgery.
Jeffreys offers no comfort zone for her readers. Unlike some feminist theorists, she refuses to couch her arguments in inaccessible, academic language, or to accept that feminism has achieved its aims. For Jeffreys, the word "complicated" does not exist. The reason for women's oppression is horribly simple: men want their power and, for that reason, they will keep women in a state of subordination to maintain it. She tells me she will never give up. "I cannot imagine living without a purpose of changing the world for the better. It gives life meaning. It is more urgent now than ever. No liberation is possible for women in a world in which inequality is sexy."
· Beauty And Misogyny: Harmful Cultural Practices In The West, by Sheila Jeffreys, is published in paperback by Routledge at £12.95.
"Politically Correct": A History (Part I) (s-usih.org)
This is the first post in a series in which I will explore the history of the descriptor “politically correct” as both a term of praise and a term of disparagement. [edited Feb. 16, 2015: Part II is posted here.]
This history – at least as I have been able to piece it together so far — begins not in the 1980s nor even in the 1960s. The use of the term “politically correct” as first an ideal and then an insult, first an aspiration and then an accusation, goes at least as far back as the 1930s. The double-edged connotation of the term was forged and sharpened in internal debates on the Left – debates between Communists and socialists in the 1930s and 1940s, between New Left radicals and radical feminists in the 1960s and 1970s, between radical feminists and “libertarian” (or, in current parlance, “sex-positive”) feminists in the 1970s and 1980s.
In any case, the pejorative use of the term “politically correct” is not a consequence of debates in the late 1980s or early 1990s; calling someone “politically correct” as a way of being insulting is a practice that goes back for several decades.
“I first heard the phrase ‘politically correct’ in the late 1940s,” Herbert Kohl wrote, “in reference to political debates between socialists and members of the United States Communist Party. These debates were an everyday occurrence in my neighborhood in the Bronx until the McCarthy committee and HUAC silenced political talk on the streets in the early 1950s. Before McCarthy, members of the CP called current party doctrine the ‘correct’ line for the moment.”[1]
This use of “politically correct” as a technical term within Communist circles to denote ideological conformity to official Party doctrine predated Kohl’s memory by more than a decade at least. This usage shows up, for example, in The Communist in 1930, in a resolution of support for recent CPUSA actions offered by American-Canadian students at the Lenin School in Moscow. These actions, the students affirmed, had “cleared the way for the correct application of the line of the Sixth World Congress and the Tenth Plenum of the U.S.A.”[2] The students commended the Party for recent statements demonstrating its “politically correct perspective,” but they expressed some concern over the extent to which the CPUSA was taking sufficient practical steps to implement that correct line.[3]
Concern about the distance between “politically correct” doctrines and practicable solutions comes through more clearly in a 1932 Communist article by Harrison George. In the piece, “Causes and Meaning of the Farmers’ Strike and Our Tasks as Communists,” George pushed back against both internal and international criticism of the CPUSA’s support for the United Farmers’ League:
The impoverished farmers will fight. But for what demands and around what slogans? Around this question there has been a discussion which has been an obstacle rather than an aid to our Party….The comrades were against the U.F.L. program. They were also against the U.F.L. and desired it liquidated. They insisted that all things be revamped to conform with the program for European peasants adopted by the All-European Peasants' Committee. We looked over the program, but are sure that few farmers would ever understand it. Of course, it is politically 'correct' to the last letter.[4]
The scare quotes around “correct” signified doubt about the value of political statements that adhered to “the letter” of Party pronouncements without regard to practical or tactical needs. And the matter of political correctness was, in this case at least, very much a matter of using approved language. George acknowledged that the language of the UFL program, by calling its activist units “Township Committees” rather than the officially approved term, “Committees of Action,” may have “obscured” the UFL program’s aims to its critics. However, he said, “we did not conceive this program of the U.F.L. as applying to the whole country, but to the Middle West where township organization is natural as a form of the united front. If we had said ‘Village’ Committees, our critics might have understood. But the Dakota farmers would not, and we wrote our program for them.”[5]
So within the CPUSA of the 1930s, there was some discussion about the desirability and possible deficiencies of “politically correct” discourse. These two examples in particular highlight perceived tensions between expectations of doctrinal purity and considerations of practical efficacy.
With the onset of World War II, Kohl argued, Communists who insisted on adhering to the “correct” Party line put themselves in the position of defending the morally indefensible.
During World War II, the Hitler-Stalin Pact caused many of these CP members considerable pain and often disgrace on my block–which was all Jewish and mostly socialist. The ‘correct’ position on Stalin’s alliance with Hitler (in favor) was considered to be ridiculous, a betrayal of European Jewry as well as of socialist ideals. Thereafter, I remember the term ‘politically correct’ being used disparagingly to refer to someone whose loyalty to the CP line overrode compassion and led to bad politics. It was used by the socialists against the communists, and was meant to separate out their own beliefs in egalitarian moral ideas from those of the dogmatic communists who would advocate and defend party positions regardless of their moral substance.[6]
Kohl, writing in 1991, argued that the contemporary, pejorative use of “politically correct” by neo-conservative critics of the academy was a deliberate effort to portray advocates of “anti-sexist and anti-racist education” in the same moral light as “Communist party hard-liners who insisted on the correct ‘line'” even when the Party line required defending the Hitler-Stalin pact – in other words, Kohl accused the neoconservatives of redbaiting. “It is a clever ploy on the part of neo-conservatives–a number of whom were themselves Communist Party members in the ’50s and are quite familiar with the term’s earlier use–to insinuate that egalitarian democratic ideas are actually authoritarian, orthodox, and communist influenced when they oppose racism, sexism, or homophobia.”
Ironically, this broad-brush invocation of an unnamed “number” of ex-Communists engaged in a “clever ploy” to paint multiculturalists as communist-influenced ideologues was itself a bit of (tongue-in-cheek?) redbaiting. That does not necessarily diminish the plausibility of Kohl’s explanation for what gave charges of “political correctness” in the 1990s at least some of their polemical punch. However, the valence of “politically correct” in the late ’80s and early ’90s was probably shaped by more recent intra-Left disputes (or more recent caricatures of such disputes).
One of the most significant intra-Left disputes informing discussions of the “politically correct” was the emergence of second wave feminism as a separate — and sometimes separatist — political force on the Left. As many scholars have noted, the rise of the women’s liberation movement in the late 1960s grew out of women activists’ rejection of the “male chauvinism” of the New Left.[7] Feminist critics pointed out that, as envisioned by the male leadership of the New Left, the radical politics of liberation was perfectly compatible with a traditional practice of subordinating women to men. Thus, for example, some feminists argued that the sexual revolution was not necessarily liberating for women, and that unconventional living arrangements often involved not so much a rejection of the bourgeois institution of marriage as a reinstantiation of its exploitative appropriation of women’s emotional, physical, and sexual labor. As part of her contribution to the manifesto, “Toward a Female Liberation Movement,” Judith Brown framed the issue in these terms:
The radical woman lives off-campus, away from her parents, and often openly with one man or another. She thinks this is ‘freedom.’ But if she shares a place with a man, she ‘plays marriage,’ which means that she cooks, cleans, does the laundry, and generally serves and waits. Hassles with parents or fear of the Dean of Women help to sustain the excitement—the romantic illusions about marriage she brought to the domicile.
If she shares an apartment with other women, it is arranged so that each may entertain men for extended visits with maximum privacy. Often, for the women, these apartments become a kind of bordello; for the men, in addition to that, a good place to meet for political discussion, to put up campus travelers, to grab a free meal, or sack out. These homes are not centers for female political activity; and rather than being judged for their interior qualities – physical or political – they are evaluated by other women in terms of the variety and status of the radical men who frequent them.[8]
A radical feminist writing in off our backs a few years later put the matter more pithily:
Males in the 1960s tried to turn daughters off to their mothers’ raps about how all men wanted was cunt—they turned this wise old woman knowledge that has been passed down from mother to daughter since the fall of the matriarchies into something that was known as unhip and unpolitical. The hippie chick became politically correct ass. The Male Left convinced ‘their’ women that it was politically correct to fuck their brains out. Non-monogamy as a political ideology in my lifetime grew up in that male context….And what the straight feminists have yet to realize who are still exploring non-monogamous relationships with men is that this is nothing new for men – they can still get free pussy if they’ll go soft on their chauvinism during foreplay.[9]
In this passage, the term “politically correct” may be less pejorative than sarcastic. In other words, the author is not necessarily criticizing the aspiration of some women to arrange their lifestyle in accordance with their political views. I think that’s what “politically correct” means here – putting one’s principles into practice. Rather, this author is criticizing how men on the Left have misused these women’s aspiration to put their politics into practice. But the aspiration itself – an aspiration born of the recognition that “the political is personal” – does not come under censure.
In my next post, I will pick up where I have left off here, with an examination of how radical feminists used the term “politically correct,” and what they meant by it. It seems to me that a key moment in this history (and perhaps a key moment in taking these intra-Left conversations to a broader audience) was the 1982 conference on sexuality at Barnard – especially the controversial panel on “Politically Correct, Politically Incorrect Sexuality.”[10]
But I am still sorting this out — as I said at the start of this post, I am exploring this history, not handing it down as a settled matter. So any suggestions or critiques would be much appreciated.
The Society for U.S. Intellectual History is a nonpartisan educational organization. The opinions expressed on the blog are strictly those of the individual writers and do not represent those of the Society or of the writers’ employers.
9 Thoughts on this Post
L.D. That is an excellent introduction and summary of the phrase or term. The writings of Chairman Mao are filled with the term correct, as in “correct handling of contradictions among the people”. There was, at least prior to the revelations of the crimes of that regime, enormous sympathy with the Maoist line, especially but not exclusively, among radical Feminists who were also part of movements for socialist transformation and even sought ways to combine their partial separatism with integrated Marxism.
I myself remain troubled and ambivalent about ALL of these movements and especially this language, which I consider a sort of turgid jargon that recreates the worst aspects of scientific and technocratic writing. I find it sad, maybe even tragic, that “criticism” of PC seems to come only from neo-conservatives and conservatives, for they have very much an agenda that is quite extreme.
I recall that in the 1990s my fondness for Fred and Ginger movies was called politically incorrect (as in morally wrong) at an academic cocktail party – on the grounds that the dancing style was anti-woman (because of Astaire leading? I am not sure), etc. This was quite serious and was not meant in a self-deprecatory way, and in some socialist groups I was in there were reading lists of approved and disapproved books.
Mitch, thank you so much for this comment!!!! I sure hope you saved some of the ephemera from those socialist reading groups. Good primary sources!
I had a section on Maoist discussions of the “politically correct,” but I decided not to go with it, because (at this point in my research), my ability to connect Maoist terminology to the US “p.c.” discussion is inferential. That is, I have secondary sources (e.g., Kazin, Crow) who mention that the New Left and women’s liberation feminists were influenced by the work of Mao. But without some good examples from, say, RAT or Ramparts or The New Left News (that’s some of the legwork I need to do to turn this from an exploration into an article), I didn’t want to claim too much for Maoist borrowings. I guess I should have been more brave!
The key — a key — with Maoist uses of “politically correct” is the connection to the Great Leap Forward, the Cultural Revolution, etc. That is, not just an insistence on p.c. political doctrine, or even p.c. art, or p.c. literature, but a notion that everything and anything in life/society could be politically correct or politically incorrect. That kind of comprehensive vision of culture as politics would be the key influence. But again, that’s something I want to track down better in primary sources.
On Maoist enforcements of “political correctness,” I found these articles helpful:
My sense (so far) is that there were always hardcore believers on the radical Left who were serious about “political correctness” in that Party line sense, but from the sources I’ve looked at so far, most people on the Left who are using that term are using it with what appears to be a lot of eye-rolling and sarcasm.
In terms of what the term “politically correct” meant or contained or embraced for those who used it seriously, that shifts on the Left at this time, and I think feminism is key here — “the personal is political,” the idea of sexual politics, the dialectic of sex, etc.
Re Mao: a few years ago Xavier Marquez (a political scientist in New Zealand) wrote a long blog post on the personality cult in Mao’s China (riffing off a book on the subject). Been quite a while since I read the post; in glancing at it just now, I see there is a reference to ‘correct’ and ‘incorrect’ cults of personality. Anyway, I’ll give the link to the post below (fwiw).
Louis, thanks for the link. That blog post provides one of the better examples of how ngrams can be useful that I’ve encountered. Plus it’s fascinating.
The extent to which being “politically correct” in U.S contexts was about signaling v. the extent to which it was viewed as a pragmatic/effectual matter is an interesting question. Not sure yet if I’ll get to it in the next post, but worth thinking about.
This week my primary source work is mostly in the writings of radical feminists of various “camps.” My big takeaway so far: radical feminists were radical. (This observation brought to you by “Stating the Obvious, with L.D. Burnett”™)
This essay, just as the Marxist Movement in general, loses the original and most important definition of ‘political correctness’ and fails to define what a ‘correct political line’ signifies or what it was used for and substitutes a useless petty bourgeois misinterpretation of the term for discussion. The distinction is 100% important.
In the example covering women’s liberation, it implies the struggle against American imperialism takes precedence over male chauvinism. However, this is never really made explicit. The only thing that is made explicit is the question of whether women should work in organizations with or without men. It was deemed by the New Left, it seems, that men and women should participate in the same organizations, while feminists insisted on separate organizations. Therefore, ‘political correctness’ was deemed anti-feminist by modernist feminists.
First, ‘political correctness’ is not a political position or our line. It is the assessment of a political position after a period of time, sort of like the conclusions of an experiment. In general, if the revolution succeeded then the line was correct. If it was crushed or the Communist Party was completely ineffectual, then the line was incorrect.
From this point of view we can safely conclude all of the organizations, the CPUSA, the Black Panthers and the Weather Underground etc. were ALL INCORRECT. The obvious resolution to the organizing principles for women would be – vanguard female revolutionaries should join organizations that include men or do not include men as fits their personal disposition. Contradiction resolved, end of story.
The concept that there are only two mutually opposite answers to a question is binary modernism. The false conclusion that the adoption of one solution means that another solution must be wrong by definition.
Barton, this (and the following blog post) is a first pass at a historical inquiry looking at shifts in usage/meaning of the term “politically correct” in American discourse.
Based on the sources I have cited here from the 1930s, I have argued that PC was used at the time as “a technical term within Communist circles to denote ideological conformity to official Party doctrine,” but also sometimes used in ways that pointed to “perceived tensions between expectations of doctrinal purity and considerations of practical efficacy.” I don’t think that’s a particularly strained or tendentious reading.
Do you have some examples of texts from the 1930s U.S. that show the term used in a different sense at that time? That would be very helpful for me in my research. And of course it would improve your self-described mansplaining immensely if you could cite some actual historical evidence to support your claims. Historical evidence is super important for us bourgeois historians — we’re kinda funny that way.
As a linguist, I am fascinated by the use of positives as negatives, and vice versa, and how language clusters form. Correct becomes incorrect, bad becomes good. I will read the follow up. In my observation, such meaning switches always have social tensions at their root.
Archaeology | Was the tomb of Jesus discovered? | yes_statement | the "tomb" of jesus was "discovered".. it has been confirmed that the "tomb" of jesus was "discovered".. archaeologists have found the "tomb" of jesus. | https://www.smithsonianmag.com/smart-news/mortar-found-jesus-tomb-dates-constantine-era-180967345/ | Mortar Found at "Jesus' Tomb" Dates to the Constantine Era | Smart ... | Mortar Found at “Jesus’ Tomb” Dates to the Constantine Era
The Church of the Holy Sepulchre's Edicule, a shrine that encloses Jesus’ purported resting place
Michael Privorotsky CC 2.0
In the year 325 A.D., according to historical sources, Constantine, Rome’s first Christian emperor, sent an envoy to Jerusalem in the hopes of locating the tomb of Jesus of Nazareth. His representatives were reportedly told that Jesus’ burial place lay under a pagan temple to Venus, which they proceeded to tear down. Beneath the building, they discovered a tomb cut from a limestone cave. Constantine subsequently ordered a majestic church—now known as the Church of the Holy Sepulchre—to be built at the site.
Over the centuries, the Church of the Holy Sepulchre has been razed during regional conflicts, consumed by a fire and rattled by an earthquake—only to be resurrected after each catastrophe. Because of the church’s tumultuous history, experts have questioned whether the tomb was at some point removed or destroyed, reports Keir Simmons of NBC News. Previously, the earliest archaeological evidence found at the site of the tomb dated to the Crusader period, about 1,000 years ago.
Then, in 2016, the tomb was opened for the first time in centuries, when experts from the National Technical University of Athens began a much-needed restoration of the Edicule, a shrine that encloses Jesus’ purported resting place. There, the team discovered the original limestone walls and a “burial bed,” or long shelf where Jesus’ body would have been laid after his crucifixion, according to Christian tradition.
The tomb was open for just 60 hours, during which time researchers took samples of mortar that had been sandwiched between the burial bed and a cracked marble slab adorned with a cross. Researchers thought the slab was likely laid down during the Crusader period, or perhaps not long before the church was destroyed by the Fatimid Caliph of Egypt in 1009, but they needed to test the samples.
Now, Kristin Romey reports in a National Geographic exclusive that testing of mortar slathered over the limestone cave lends credence to historical accounts of the tomb’s discovery by the Romans. The mortar has been dated to approximately 345 A.D., which falls “securely in the time of Constantine,” Romey writes.
To test the mortar samples, researchers relied on optically stimulated luminescence (OSL), a technique that is able to determine the last time quartz sediment was exposed to light. And the results suggested that the marble slab was in fact laid down during the Roman period, conceivably under the direction of emperor Constantine.
“Obviously that date is spot-on for whatever Constantine did," archaeologist Martin Biddle, author of The Tomb of Christ, an important text on the Church of the Holy Sepulchre, tells Romey. "That's very remarkable."
The project's chief scientific supervisor Antonia Moropoulou and her team will publish their complete findings on the samples in an upcoming issue of the Journal of Archaeological Science: Reports. The National Geographic Channel will also air a documentary titled "Secrets of Christ’s Tomb" on December 3.
Archaeology | Was the tomb of Jesus discovered? | yes_statement | the "tomb" of jesus was "discovered".. it has been confirmed that the "tomb" of jesus was "discovered".. archaeologists have found the "tomb" of jesus. | https://www.pbs.org/newshour/science/archaeologists-excavate-jesuss-midwife-tomb-in-israel | Archaeologists excavate 'Jesus's midwife' tomb in Israel | PBS ... | Archaeologists excavate ‘Jesus’s midwife’ tomb in Israel
JERUSALEM (AP) — An ancient tomb traditionally associated with Jesus’s midwife is being excavated anew by archaeologists in the hills southwest of Jerusalem, the antiquities authority said Tuesday.
The intricately decorated Jewish burial cave complex dates to around the first century A.D., but it was later associated by local Christians with Salome, the midwife of Jesus in the Gospels. A Byzantine chapel was built at the site, which was a place of pilgrimage and veneration for centuries thereafter.
The cave was first found and excavated decades ago by an Israeli archaeologist. The cave’s large forecourt is now under excavation by archaeologists as part of a heritage trail development project in the region.
A man holds a clay lamp that, according to The Israel Antiquities Authority, was discovered near the 2,000-year-old burial cave of Jesus’s midwife, Salome, and may have been used as part of religious ceremonies in the Lachish Forest in Israel. Photo by Ammar Awad/Reuters
Crosses and inscriptions in Greek and Arabic carved in the cave walls during the Byzantine and Islamic periods indicate that the chapel was dedicated to Salome.
Pilgrims would “rent oil lamps, enter into the cave, used to pray, come out and give back the oil lamp,” said Ziv Firer, director of the excavation. “We found tens of them, with beautiful decorations of plants and flowers.”
A view shows a cave that, according to The Israel Antiquities Authority, is the 2,000-year-old burial cave of Jesus's midwife, Salome, in the Lachish Forest in Israel, December 20, 2022. REUTERS/Ammar Awad
Archaeology | Was the tomb of Jesus discovered? | yes_statement | the "tomb" of jesus was "discovered".. it has been confirmed that the "tomb" of jesus was "discovered".. archaeologists have found the "tomb" of jesus. | https://en.wikipedia.org/wiki/Tomb_of_Jesus | Tomb of Jesus - Wikipedia | The marble covering protecting the original limestone slab upon which Jesus was thought to have been laid by Joseph of Arimathea had been temporarily removed for restoration and cleaning on October 26, 2016, as a result revealing the original slab for the first time since 1555.[6]
The Talpiot Tomb (or Talpiyot Tomb) is a rock-cut tomb discovered in 1980 in the East Talpiot neighborhood, five kilometers (three miles) south of the Old City in East Jerusalem. It contained ten ossuaries, six inscribed with epigraphs, including one interpreted as "Yeshua bar Yehosef" ("Jeshua, son of Joseph"), although the inscription is partially illegible, and its translation and interpretation is widely disputed.[9] It is widely believed by scholars that the Jesus in Talpiot (if this is indeed his name) is not Jesus of Nazareth, but a person with the same name, since he appears to have a son named Judas (buried next to him) and the tomb shows signs of belonging to a wealthy Judean family, while Jesus of Nazareth came from a low-class Galilean family.[10]
The shrine was relatively unknown until the founder of the Ahmadiyya movement, Mirza Ghulam Ahmad, claimed in 1899 that it is actually the tomb of Jesus.[16] This view is maintained by Ahmadis today, though it is rejected by the local Sunni caretakers of the shrine, one of whom said "the theory that Jesus is buried anywhere on the face of the earth is blasphemous to Islam."[17]
Shingō village in Japan contains another location of what is purported to be the last resting place of Jesus, the so-called "Tomb of Jesus" (Kirisuto no haka), and the residence of Jesus' last descendants, the family of Sajiro Sawaguchi.[18] According to the Sawaguchi family's claims, Jesus Christ did not die on the cross at Golgotha. Instead his brother, Isukiri,[19] took his place on the cross, while Jesus fled across Siberia to Mutsu Province, in northern Japan. Once in Japan, he changed his name to Torai Tora Daitenku, became a rice farmer, married a twenty-year old Japanese woman named Miyuko, and raised three daughters near what is now Shingō. While in Japan, it is asserted that he traveled, learned, and eventually died at the age of 106. His body was exposed on a hilltop for four years. According to the customs of the time, Jesus' bones were collected, bundled, and buried in the mound purported to be the grave of Jesus Christ.[20][21]
^Ghulām Muhyi'd Dīn Sūfī Kashīr, being a history of Kashmir from the earliest times to our own 1974 – Volume 2 – Page 520 "Bal, in Kashmiri, means a place and is applied to a bank, or a landing place."
^B. N. Mullik – My years with Nehru: Kashmir – Volume 2 1971 – Page 117 "Due to the presence of the Moe-e-Muqaddas on its bank the lake gradually acquired the name Hazratbal (Bal in Kashmiri means lake) and the mosque came to be known as the Hazratbal Mosque. Gradually the present Hazratbal village grew ..."
^Nigel B. Hankin Hanklyn-janklin: a stranger's rumble-tumble guide to some words 1997 Page 125 (Although bal means hair in Urdu, in this instance the word is Kashmiri for a place – Hazratbal – the revered place.) HAZRI n Urdu Lit. presence, attendance. In British days the word acquired the meaning to Europeans and those associated with ..."
^Andrew Wilson The Abode of Snow: Observations on a Journey from Chinese Tibet to ... 1875 reprint 1993– Page 343 Bal means a place, and Ash is the satyr of Kashmir traditions."
^J. Gordon Melton The Encyclopedia of Religious Phenomena 2007 "Ahmad specifically repudiated Notovitch on Jesus' early travels to India, but claimed that Jesus did go there late in His life. The structure identified by Ahmad as Jesus' resting place is known locally as the Roza Bal (or Rauza Bal)."
Archaeology | Was the tomb of Jesus discovered? | yes_statement | the "tomb" of jesus was "discovered".. it has been confirmed that the "tomb" of jesus was "discovered".. archaeologists have found the "tomb" of jesus. | https://aleteia.org/2023/07/04/st-peters-tomb-when-science-confirms-tradition/ | St. Peter's tomb: When science confirms Tradition | St. Peter’s tomb: When science confirms Tradition
A true story of lost tombs, inscriptions in ancient languages, and human bones, worthy of Indiana Jones.
Oral tradition has always taught that the Emperor Constantine built the first St. Peter’s Basilica on the same site as the tomb of the first pope. Until the 20th century no attempt had been made to validate or invalidate this belief. People of faith relied instinctively on the teaching handed down from generation to generation. But in 1939, a fortuitous event triggered archaeological excavations and involved Christians and scientists in a veritable Indiana Jones saga.
Today’s St. Peter’s Basilica has a lower level called “the Vatican Grottoes,” where many popes are buried. The floor level of these caves corresponds approximately to that of the first basilica, built by Emperor Constantine in the 4th century.
The initial discovery
At his death in 1939, Pius XI expressed the wish to be buried in these grottoes, together with Pius X. Despite the overcrowded conditions of the site, the new Pope Pius XII was eager to respect the last wishes of his predecessor. So he decided to lower the pavement of the caves, in order to enlarge the space dedicated to the future mausoleum. While carrying out the work, the workers discovered an empty space under the pavement where the remains of a funerary building could be seen. Thus a third level appeared, that of a vast Roman necropolis.
Pius XII had always been interested in Christian archaeology, seeing in it, rightly, an excellent way to bring to life the writings of the proto-Christian era. Consequently, he ordered the continuation of research and launched archaeological excavations with the best specialists, archaeologists and exegetes working together.
In total, two excavation projects (1940-1947 and 1953-1957) made it possible to explore, under the basilica, one of the richest and best preserved Roman necropolises, dating back to the 1st and 2nd centuries AD. Archaeologists discovered 22 large tombs, as well as hundreds of smaller tombs, on both sides of a narrow alley.
But the most spectacular discovery, the one that has since aroused historical attention and religious devotion, is of course the discovery of the tomb of St. Peter. In fact, archaeologists unearthed, directly beneath Bernini’s high altar, the remains of a small funerary monument built in the mid-2nd century and which has turned out to be, in all probability, the tomb of the first pope.
Burying the martyrs not far from their place of death
As early as the end of the 1st century, Christian sources mention the martyrdom of Peter in Rome: His arrest and execution took place under Nero, after the fire that ravaged the city in 64. The Acts of Peter, an apocryphal text, recounts the crucifixion of the apostle (with his head downwards) in the circus of Caligula, which had just been restored by Nero.
This circus was located on the outskirts of Rome at the foot of the Vatican hill, on which a vast cemetery extended where pagan and Christian tombs rubbed shoulders. There’s no doubt that the Christian community of Rome came to claim the body of Peter and bury him with dignity, as authorized by Roman law after an execution.
At that time, Christians used to bury the martyrs in the vicinity of the place of their death, no doubt to facilitate the transmission of the memory of the places. On the other hand, they were still following the Jewish law, which prescribes that the deceased should be buried as soon as possible. Therefore, Peter was certainly, like Christ, buried in the cemetery closest to the place of his execution, in a place that belonged to a Christian.
Early testimonies
The first mention that has come down to us of the tomb of Peter on the Vatican hill dates from around the year 200. It’s found in a letter sent by the priest Gaius to a certain Proclus. He explains that the apostles Peter and Paul are buried in Rome, the former in the Vatican, the latter on the road to Ostia: “Obviously I can show you the trophies of the Apostles. If you want to go to the Vatican or on the way to Ostia, you will find the trophies of those who founded the Roman Church.” In this text, the word “trophy” designates the monument built over the tomb, a monument that represents the reward of the martyr, the victory of eternal life over death.
As Christophe Dickès explains in his excellent biography of St. Peter, the text of Gaius and the monument discovered under the basilica confirm each other, and add credibility to the tradition of the Church, the fruit of a very long oral and written tradition dating back to the first century.
Ancient bones
But the story doesn’t end there. One day in 1941, Msgr. Kaas, one of those in charge of the excavation, was making his daily tour in the company of the leader of the workers. Out of respect for the deceased, Msgr. Kaas was particularly attentive to ensure that the numerous human bones unearthed were collected and piously preserved.
Thus, each batch of bones was carefully logged and recorded in an individual box. While taking stock of the day’s work, the two men discovered human bones in a sort of secret cavity, or loculus, excavated in one of the walls of the now well-known small funerary monument, which became known as the “trophy of Gaius.”
This cavity is striking because it’s lined with marble slabs. Msgr. Kaas carefully collected the one hundred human bones and placed them in a numbered box, which was then placed together with the other boxes in a shed. It was then forgotten for several years.
Absolute surprise
During the second period of excavation, an epigrapher removed the box and sent the bones to the laboratory. The analysis revealed that they belong to a single male individual, of robust constitution despite his arthritis, aged 60 to 70 years at the time of his death. This description could very well correspond to Peter.
Along with these analyses, the epigrapher noted on the wall of the loculus an inscription in Greek — Petro Eni — that can be translated as “Peter is here” or “Peter rests in peace.” These discoveries resounded like thunder in the scientific world and in the Christian world: Archaeologists believed they have found not only the tomb of St. Peter, but also his holy remains.
Therefore, archaeological findings, architectural findings, and biological analyses are consistent with the ancient testimony of Gaius. But, even more simply, all this work and research is in agreement with the Tradition of the Church. In fact, the Basilica of Constantine was built on and around the tomb of Peter. The people of the Renaissance, at the time of the reconstruction of the basilica, perfectly respected the popular belief, despite not having any material evidence. And, centuries later, the venerable tomb was found directly below Michelangelo’s dome and Bernini’s altar.
The relics of the holy apostle and first pope were exposed for the veneration of the faithful for the first time in 2013. Six years later, Pope Francis, in a profound gesture of unity, offered part of the relics of St. Peter to Patriarch Bartholomew. Now honored in Rome and Constantinople, these relics constitute a strong historical bond between Catholics and Orthodox.
Pope Francis kisses the relics of the Apostle Peter on the altar during a mass at St. Peter’s Square at the Vatican | This cavity is striking because it’s lined with marble slabs. Msgr. Kaas carefully collected the one hundred human bones and placed them in a numbered box, which was then placed together with the other boxes in a shed. It was then forgotten for several years.
Absolute surprise
During the second period of excavation, an epigrapher removed the box and sent the bones to the laboratory. The analysis revealed that they belong to a single male individual, of robust constitution despite his arthritis, aged 60 to 70 years at the time of his death. This description could very well correspond to Peter.
Along with these analyses, the epigrapher noted on the wall of the loculus an inscription in Greek — Petro Eni — that can be translated as “Peter is here” or “Peter rests in peace.” These discoveries resounded like thunder in the scientific world and in the Christian world: Archaeologists believed they have found not only the tomb of St. Peter, but also his holy remains.
Therefore, archaeological findings, architectural findings, and biological analyses are consistent with the ancient testimony of Gaius. But, even more simply, all this work and research is in agreement with the Tradition of the Church. In fact, the Basilica of Constantine was built on and around the tomb of Peter. The people of the Renaissance, at the time of the reconstruction of the basilica, perfectly respected the popular belief, despite not having any material evidence. And, centuries later, the venerable tomb was found directly below Michelangelo’s dome and Bernini’s altar.
The relics of the holy apostle and first pope were exposed for the veneration of the faithful for the first time in 2013. Six years later, Pope Francis, in a profound gesture of unity, offered part of the relics of St. Peter to Patriarch Bartholomew. | no |
Archaeology | Was the tomb of Jesus discovered? | no_statement | the "tomb" of jesus was not "discovered".. there is no evidence to suggest that the "tomb" of jesus was "discovered".. the discovery of the "tomb" of jesus has not been confirmed. | https://enduringword.com/bible-commentary/luke-24/ | Enduring Word Bible Commentary Luke Chapter 24 | Audio for Luke 24:
A. The resurrection of Jesus is discovered.
1. (1-3) Women followers of Jesus discover the empty tomb of Jesus.
Now on the first day of the week, very early in the morning, they, and certain other women with them, came to the tomb bringing the spices which they had prepared. But they found the stone rolled away from the tomb. Then they went in and did not find the body of the Lord Jesus.
a. Now on the first day of the week, very early in the morning: Jesus was crucified on Friday (or on Thursday by some accounts). After His entombment, the tomb was sealed and guarded by Roman soldiers (Matthew 27:62-66). The tomb stayed sealed and guarded until discovered by these women on the first day of the week, very early in the morning.
i. A rich man like Joseph of Arimathea would likely have a tomb carved into solid rock; this tomb was in a garden near the place of crucifixion (John 19:41). The tomb would have a small entrance and perhaps one or more compartments where bodies were laid out after being wrapped with linen strips smeared with spices, aloes, and ointments. Customarily, the Jews left these bodies alone for a few years until they decayed down to the bones, then the bones were placed in a small stone box known as an ossuary. The ossuary remained in the tomb with the remains of other family members.
ii. The entrance to the tomb was blocked by a heavy circular shaped stone, securely rolled in a channel, so only several strong men could move it. This was done to ensure that no one would disturb the remains.
iii. John 19:42 specifically tells us that the tomb of Joseph of Arimathea that Jesus was laid in was close to the place of Jesus’ crucifixion (and each of the two suggested places for Jesus’ death and resurrection bear this out). Joseph probably didn’t like it that the value of his family tomb decreased because the Romans decided to crucify people nearby; yet it reminds us that in God’s plan, the cross and the power of the resurrection are always permanently and closely connected.
iv. “This became the day of Christian worship (cf. Acts 20:7). The change from the traditional and biblical Sabbath is in itself a strong evidence of the Resurrection because it shows the strength of the disciples’ conviction about what happened on that day.” (Liefeld)
b. They, and certain other women with them: These women are of special note. They refers to the women from Galilee who saw Jesus put in the tomb (Luke 23:55-56). Luke agrees with Mark 15:47 and Matthew 27:61 that they included Mary Magdalene and Mary the mother of James (Luke 24:10). The certain other women with them included Joanna, (Luke 24:10) and others, unnamed (and the other women with them, Luke 24:10).
i. “These women came first, by a wonderful providence, before the apostles, to confute that impudent lie made by the priests, that the disciples had stolen the body away.” (Trapp)
c. Came to the tomb bringing the spices which they had prepared: The body of Jesus was hastily prepared for burial by Joseph of Arimathea and Nicodemus (John 19:38-41). The women came to properly complete the hurried job performed immediately after Jesus’ death.
i. Mark 16:3 tells us that the women discussed the problem of what to do with the heavy stone blocking the entrance to the tomb.
d. But they found the stone rolled away from the tomb. Then they went in and did not find the body of the Lord Jesus: The actual event of Jesus’ resurrection is nowhere described, but the discovery of it is recorded in some detail. Here, the women who intended to give Jesus’ body a more proper burial discover that the stone was rolled away from the tomb, and that the body of Jesus was not inside the tomb.
i. “This lack of spectacular detail itself speaks for the historicity of the New Testament documents. There is no attempt on the part of the writers to embellish the event of the Resurrection.” (Pate)
ii. Matthew 27:65-66 reminds us that there was a guard set round the tomb. The stone could not have been rolled away by the women (they were not strong enough) or by the disciples (even if they were brave enough, they could not overcome the armed guards). No one else would have wanted to roll away the stone, and Matthew 28:2 tells us that it was an angel who rolled it away.
iii. The stone was not rolled away to let Jesus out. John 20:19 tells us that Jesus, in His resurrection body, could pass through material barriers. The stone was rolled away so that others could see in and be persuaded that Jesus Christ was and is risen from the dead.
2. (4-8) The angelic announcement of the resurrection.
And it happened, as they were greatly perplexed about this, that behold, two men stood by them in shining garments. Then, as they were afraid and bowed their faces to the earth, they said to them, “Why do you seek the living among the dead? He is not here, but is risen! Remember how He spoke to you when He was still in Galilee, saying, ‘The Son of Man must be delivered into the hands of sinful men, and be crucified, and the third day rise again.’” And they remembered His words.
a. As they were greatly perplexed about this: Once the women saw the stone rolled away and the tomb empty, their immediate reaction was that they were greatly perplexed. They did not expect to find an empty tomb. This shows that the resurrection accounts cannot be the product of wishful thinking; they were not even expecting that it could happen.
b. Two men stood by them in shining garments: Even as angels announced the birth of Jesus, (Luke 2:8-15) so they also announced the resurrection of Jesus. The announcement of His birth was made to a few humble people, considered unimportant by the culture; His resurrection announced by angels to a few women.
c. Why do you seek the living among the dead? This was a wonderfully logical question. The angels seemed almost surprised that the women were surprised; after all, the angels had heard what Jesus said regarding His resurrection, and they knew the women had heard it also. They naturally wondered why the women were surprised.
i. “Jesus is not to be thought of as dead: therefore he is not be sought among the dead.” (Morris)
ii. “As places of burial were unclean, it was not reasonable to suppose that the living should frequent them; or that if any was missing he was likely to be found in such places.” (Clarke)
iii. The angels’ question made a point: the living are not to be found among the dead. We should not expect spiritual life among those who do not have it. Many look for Jesus in dead things – religious traditionalism, formalism, man’s rules, human effort and ingenuity. We find Jesus only where there is resurrection life, where He is worshipped in Spirit and in truth.
d. He is not here: These were some of the most beautiful and important words ever spoken by an angel to men. One may look all over Jerusalem and see countless thousands of tombs, but one will never find the tomb of Jesus – because He is not here.
i. Every so often someone claims to have found evidence of the tomb of Jesus or the bones of Jesus. Each claim is found to be untrue, while the testimony of the angels is proved true over and over again: He is not here.
ii. Even the beginning of the resurrection account refutes many of the false alternative theories suggested by some.
· The wrong tomb theory is answered by Luke 23:55; the women knew exactly which tomb Jesus was buried in.
· The wishful thinking theory is answered by Luke 24:4 and 24:11, which note the surprise of the women and the disciples at the news of Jesus’ resurrection.
· The animals-ate-the-body theory is answered by the presence of the stone (Luke 24:2).
· The swoon theory is answered by the presence of the stone (Luke 24:2).
· The grave robber theory is answered by the presence of the Roman guard and seal (Matthew 27:62-66).
e. The Son of Man must be delivered into the hands of sinful men, and be crucified, and the third day rise again: To the women, it must have seemed like a long time ago that Jesus said these words (Luke 18:31-33). Nevertheless, they needed to remember them and the angels remind them of what Jesus said.
i. Must is the critical word here; just as much as the crucifixion of Jesus was necessary and ordained, so was His resurrection. Jesus would have never come to the place of Calvary unless there was also an empty tomb of resurrection there.
f. And they remembered His words: The first notes of hope were sounded in the hearts of the women when they remembered Jesus’ words. The empty tomb, the presence of angels, the words of the angels in and of themselves could not change their hearts – but His words could change and cheer their hearts.
3. (9-11) The women tell the apostles and are not believed.
Then they returned from the tomb and told all these things to the eleven and to all the rest. It was Mary Magdalene, Joanna, Mary the mother of James, and the other women with them, who told these things to the apostles. And their words seemed to them like idle tales, and they did not believe them.
a. Then they returned from the tomb and told all these things to the eleven and to all the rest: The women who saw the evidence of the resurrected Jesus and remembered His words were excited about what seemed to be the most wonderful news possible – that Jesus was alive and had triumphed over death.
i. They would not be excited if Jesus had only somehow miraculously survived the ordeal of the cross. The news that He was alive meant so much more to them than knowing Jesus was a survivor; it meant He was the conqueror over death and that He was everything they had hoped for and more.
b. It was Mary Magdalene, Joanna, Mary the mother of James, and the other women with them: These were the women mentioned in Luke 24:1 as those who discovered the empty tomb. Three are mentioned specifically, and then an unnamed group of other women. These were given the privilege of being the first to tell others of the risen Jesus.
i. The only references to Mary Magdalene in the Gospels concern her as a witness of the crucifixion (Mark 15:40 and John 19:25) and of the resurrection (all four gospels) and as one from whom Jesus had cast out seven demons (Luke 8:2, Mark 16:9).
ii. Joanna is mentioned in Luke 8:2 as one of the women who accompanied Jesus and provided for His needs. She is also noted in Luke 8:3 as the wife of Chuza, who helped manage Herod’s affairs (a steward). She was likely a woman of privilege and resources.
iii. Mary the mother of James is only mentioned in connection with the resurrection appearances of Jesus. She was apparently the mother of one of the apostles, James the Less (not James the brother of John).
c. Their words seemed to them like idle tales, and they did not believe them: Despite their excitement, the testimony of the women was not believed. In fact, to the apostles, it seemed as if the women told idle tales, a medical word used to describe the babbling of a fevered and insane man (according to Barclay).
i. “In the first century the testimony of women was not deemed authoritative. Luke’s inclusion of the incident serves to emphasize his high regard for women.” (Pate)
ii. “The disciples were not men poised on the brink of belief and needing only the shadow of an excuse before launching forth into a proclamation of resurrection. They were utterly skeptical.” (Morris)
4. (12) The apostles come to believe.
But Peter arose and ran to the tomb; and stooping down, he saw the linen cloths lying by themselves; and he departed, marveling to himself at what had happened.
a. But Peter arose and ran to the tomb: We know from John 20:3-8 that both Peter and John ran to the tomb together. They saw grave clothes, but not as if they had been ripped off after a struggle. They saw the grave clothes of Jesus lying in perfect order, as if a body had just passed out of them (John 20:6-7). When John saw that, he believed, and Peter marveled. They had not seen the risen Jesus, but they knew that something powerful had happened to cause a body to leave behind the grave clothes in such a manner.
b. Marveling to himself at what had happened: Peter and John both observed what was in the tomb and John believed (John 20:8). This tells us that Peter analyzed the situation; he knew something spectacular had happened because of the condition of the grave clothes, but because he had forgotten the words of Jesus (John 20:9), he did not yet understand and believe the way John had.
i. You can know that Jesus rose from the dead, but unless you know His words, it won’t make sense. Without knowing the life and teachings of Jesus:
· You don’t know that the resurrection means that the payment that Jesus offered on the cross was perfect and complete.
· You don’t know that the cross was the payment and the empty tomb is the receipt.
· You don’t know that death has no hold on redeemed man.
· You don’t know that when God’s love and man’s hate battled at the cross, God’s love won.
· You don’t know that because Jesus was raised from the dead, we can be resurrected in Him.
B. On the road to Emmaus.
1. (13-16) Jesus joins two disciples on a road.
Now behold, two of them were traveling that same day to a village called Emmaus, which was seven miles from Jerusalem. And they talked together of all these things which had happened. So it was, while they conversed and reasoned, that Jesus Himself drew near and went with them. But their eyes were restrained, so that they did not know Him.
a. Two of them were traveling that same day to a village called Emmaus: On this Sunday, these two disciples traveled to Emmaus from Jerusalem. As they walked together (probably returning from the Passover celebration in Jerusalem) it gave them opportunity to talk.
i. These weren’t famous apostles, they were simple and half-anonymous followers of Jesus. “I take it as characteristic of the Lord that in the glory of His resurrection life He gave Himself with such fullness of disclosure to these unknown and undistinguished men… He still reveals Himself to lowly hearts. Here is the Saviour for the common man. Here is the Lord who does not spurn the humble.” (Morrison)
ii. “There is considerable uncertainty about the original location of the village of Emmaus. Luke mentions that it was about seven miles (literally, ‘sixty stadia’) from Jerusalem. If he meant round-trip, the reference would fit rather nicely with a town Josephus identified as Emmaus, which he located thirty stadia from Jerusalem.” (Pate)
iii. “Luke almost certainly obtained his information from one of the two disciples, and probably in writing. The account has all the effect of personal experience.” (Plummer, cited in Geldenhuys)
b. They conversed and reasoned: As they talked, they spoke of the things that were biggest on their hearts – all of these things which had happened, the things regarding the arrest and crucifixion of Jesus.
c. Jesus Himself drew near and went with them: Jesus came along side these disciples, and went with them for a while. Yet for a time they were miraculously prevented from seeing who Jesus was.
i. “When two saints are talking together, Jesus is very likely to come and make the third one in the company. Talk of him, and you will soon talk with him.” (Spurgeon)
2. (17-24) The disciples explain what they talked about.
And He said to them, “What kind of conversation is this that you have with one another as you walk and are sad?” Then the one whose name was Cleopas answered and said to Him, “Are You the only stranger in Jerusalem, and have You not known the things which happened there in these days?” And He said to them, “What things?” So they said to Him, “The things concerning Jesus of Nazareth, who was a Prophet mighty in deed and word before God and all the people, and how the chief priests and our rulers delivered Him to be condemned to death, and crucified Him. But we were hoping that it was He who was going to redeem Israel. Indeed, besides all this, today is the third day since these things happened. Yes, and certain women of our company, who arrived at the tomb early, astonished us. When they did not find His body, they came saying that they had also seen a vision of angels who said He was alive. And certain of those who were with us went to the tomb and found it just as the women had said; but Him they did not see.”
a. What kind of conversation is this that you have with one another as you walk and are sad? Jesus opened the conversation by asking them what they had talked about. From this, we can know that Jesus had walked silently with them for a while, just listening as they carried on the conversation.
i. It was evident in their countenance (and perhaps even in their manner of walking) that they were sad. Jesus knew both what they already knew (that they were sad) and what they did not yet know (that they had no reason to be sad).
b. Are You the only stranger in Jerusalem, and have You not known the things which happened there in these days? Jesus probably smiled when they said this. He knew pretty well what had happened there in these days.
c. What things? In saying this, Jesus skillfully played along with the conversation, encouraging the men to reveal their hearts. Even though He knew their hearts, there was value in them saying it to Jesus.
d. The things concerning Jesus of Nazareth: The men explained what they did know about Jesus.
· They knew His name and where He was from.
· They knew He was a Prophet.
· They knew He was mighty in deed and word.
· They knew He was crucified.
· They knew He promised to redeem Israel.
· They knew others had said He rose from the dead.
e. We were hoping: These disciples had a hope that seemed disappointed. Their hope was not truly disappointed, but in some ways their hope was misguided (that it was He who was going to redeem Israel). Jesus would show them that their true hope was fulfilled in Him and His resurrection.
f. Just as the women had said: The only thing these disciples had to go on was the testimony of others, but they were slow to believe. The report of the women meant little to them, and the report of Peter and John who had seen the grave clothes meant little – because Him they did not see.
i. Jesus wanted to know from them what He wants to know from us today: can we believe without seeing with our own eyes? We can believe and must believe based on the reliable eyewitness testimony of other people.
3. (25-27) Jesus teaches them why the Messiah had to suffer.
Then He said to them, “O foolish ones, and slow of heart to believe in all that the prophets have spoken! Ought not the Christ to have suffered these things and to enter into His glory?” And beginning at Moses and all the Prophets, He expounded to them in all the Scriptures the things concerning Himself.
a. Slow of heart to believe: Jesus told them that the problem with their belief was more in their heart than their head. We often think the main obstacles to belief are in the head, but they are actually in the heart.
b. Ought not the Christ to have suffered these things and to enter into His glory? They should have believed what all the prophets have spoken, that the Messiah would suffer first and then be received in glory.
· They were common, simple men.
· They had lost hope.
· They had lost joy – a sense of spiritual desertion.
· They had not lost desire – they still loved to talk about Jesus.
· They had not yet seen the necessity of the cross.
i. The prophets spoke in Isaiah 53:3-5: He is despised and rejected by men, a Man of sorrows and acquainted with grief. And we hid, as it were, our faces from Him; He was despised, and we did not esteem Him. Surely He has borne our griefs and carried our sorrows; yet we esteemed Him stricken, smitten by God, and afflicted. But He was wounded for our transgressions, He was bruised for our iniquities; the chastisement for our peace was upon Him, and by His stripes we are healed.
ii. Isaiah 50:5-7 is another example of what the prophets taught concerning this. The Lord GOD has opened My ear; and I was not rebellious, nor did I turn away. I gave My back to those who struck Me, and My cheeks to those who plucked out the beard; I did not hide My face from shame and spitting. For the Lord GOD will help Me; therefore I will not be disgraced; therefore I have set My face like a flint, and I know that I will not be ashamed.
iii. Daniel 9:26 shows another prophet regarding these things: The Messiah shall be cut off, but not for Himself.
iv. Zechariah 12:10 is yet another example: They will look on Me whom they pierced. Yes, they will mourn for Him as one mourns for his only son, and grieve for Him as one grieves for a firstborn.
c. And beginning at Moses and all the Prophets, He expounded to them in all the Scriptures the things concerning Himself: Jesus began to teach them what was surely one of the most spectacular Bible studies ever taught. Beginning in Moses and all the Prophets, He told them all about the Messiah.
i. “It is a sign to us that He is still the same, though He has passed into the resurrection glory, that He still goes back to the old familiar Scripture which He had learned beside His mother’s knee.” (Morrison)
ii. He told them that the Messiah was:
· The Seed of the Woman, whose heel was bruised.
· The blessing of Abraham to all nations.
· The High Priest after the order of Melchizedek.
· The Man who wrestled with Jacob.
· The Lion of the Tribe of Judah.
· The voice from the burning bush.
· The Passover Lamb.
· The Prophet greater than Moses.
· The captain of the Lord’s army to Joshua.
· The ultimate Kinsman-Redeemer mentioned in Ruth.
· The son of David who was a King greater than David.
· The suffering Savior of Psalm 22.
· The Good Shepherd of Psalm 23.
· The wisdom of Proverbs and the Lover of the Song of Solomon.
· The Savior described in the prophets and the suffering Servant of Isaiah 53.
· The Princely Messiah of Daniel who would establish a kingdom that would never end.
iii. “The Savior, who knows the Word of God perfectly, because of His intimate union with the Spirit who is its Primary Author, expounded to them in broad outline all the Scriptures that referred to Him, from the first books of the Old Testament and right through to the end.” (Geldenhuys)

iv. “We should not understand this as the selection of a number of proof-texts, but rather as showing that throughout the Old Testament a consistent divine purpose is worked out, a purpose that in the end meant and must mean the cross.” (Morris)
d. Expounded to them in all the Scriptures: This describes how Jesus taught them. The idea of expounding is to simply let the text speak for itself; exactly what a Bible teacher should do his or her best to do.
i. The ancient Greek word for expounded (diermeneuo) has the idea of sticking close to the text. In another passage when Luke used this word it is expressed with the word translated (Acts 9:36). When Jesus explained things concerning Himself in the Old Testament He didn’t use fanciful allegories or speculative ideas. He expounded, which means He stuck close to the text.
ii. “The Scripture was a familiar book to them. And what did our Lord do when He met with them? He took the book they had studied all their lives. He turned to the pages that they knew so well. He led them down by the old familiar texts.” (Morrison)
4. (28-32) Jesus is revealed to the disciples on the road to Emmaus.
Then they drew near to the village where they were going, and He indicated that He would have gone farther. But they constrained Him, saying, “Abide with us, for it is toward evening, and the day is far spent.” And He went in to stay with them. Now it came to pass, as He sat at the table with them, that He took bread, blessed and broke it, and gave it to them. Then their eyes were opened and they knew Him; and He vanished from their sight. And they said to one another, “Did not our heart burn within us while He talked with us on the road, and while He opened the Scriptures to us?”
a. He indicated that He would have gone farther: Jesus acted as if He might continue on farther, but did not want to force His company on these disciples. But they constrained Him shows that even though they didn’t know this was Jesus in their midst, they knew they wanted to spend as much time as they could with this man.
i. “It is a very strong word that, ‘they constrained him’; it is akin to the one which Jesus used when he said, ‘The kingdom of heaven suffereth violence.’ They not only invited him, but they held him, they grasped his hand, they tugged at his skirts, they said he should not go.” (Spurgeon)
b. He took bread, blessed and broke it: These men were not present at the last supper Jesus had with his twelve disciples; they knew nothing of the sacramental nature of breaking bread in theological terms.
i. “It was in no sense a sacramental meal, as we use that word sacrament in our theology. It was a frugal supper in a village home of two tired travellers, and another. Yet it was then – in the breaking of bread, and not in any vision of resurrection splendor – that they knew that their companion was the Lord.” (Morrison)
c. Then their eyes were opened and they knew Him: Though it was not what might be called a sacramental meal, there was something in it that showed them who the mysterious and wise guest was. Before, their eyes were restrained (Luke 24:16); now their eyes were opened and He was known to them in the breaking of bread (Luke 24:35).
i. Morrison suggested several ways that they might have recognized Jesus in the breaking of bread:
· The way He took the place of host with “the quiet air of majesty.”
· The way He gave the blessing over the meal they would eat.
· The pierced hands that gave them the bread.
ii. “However it was, whether by word or hand, they felt irresistibly that this was He. Some little action, some dear familiar trait, told them in a flash this was the Christ.” (Morrison)
iii. Jesus may be right in front of you, walking with you and sitting down with you at every meal – and your eyes could be restrained from seeing Him. We therefore should pray that God would open our eyes to see Jesus as He is, as being with us all the time.
d. He vanished from their sight: As soon as their eyes were opened to who Jesus was, He left miraculously and they both said what was on their hearts. Their hearts burned as they heard Him speak and teach.
e. Did not our heart burn within us while He talked: Even when they didn’t know it was Jesus, even when they didn’t believe He was risen from the dead, their heart still burned because of the ministry of God’s Word and of Jesus, the Living Word of God.
i. God’s word can have this same effect on our heart, even when we don’t know that it is Jesus doing that work.
ii. Neither of them knew the other’s heart burned until Jesus left. After that, they could have a fellowship of flaming hearts together. One reason Jesus left was so that they would love one another, and minister to one another.
5. (33-35) They tell the good news.
So they rose up that very hour and returned to Jerusalem, and found the eleven and those who were with them gathered together, saying, “The Lord is risen indeed, and has appeared to Simon!” And they told about the things that had happened on the road, and how He was known to them in the breaking of bread.
a. So they rose up that very hour and returned to Jerusalem: After a seven-mile walk one way, they were so excited that they went seven miles back – and probably much faster on the return. They had the passion to tell the great news of Jesus’ resurrection.
b. The Lord is risen indeed, and has appeared to Simon: They had mutual confirmation of the resurrection of Jesus. Though the risen Jesus was not physically in their midst, His resurrection had been confirmed by more than two witnesses.
C. Jesus teaches His disciples and ascends into heaven.
1. (36-43) Jesus appears to the eleven.
Now as they said these things, Jesus Himself stood in the midst of them, and said to them, “Peace to you.” But they were terrified and frightened, and supposed they had seen a spirit. And He said to them, “Why are you troubled? And why do doubts arise in your hearts? Behold My hands and My feet, that it is I Myself. Handle Me and see, for a spirit does not have flesh and bones as you see I have.” When He had said this, He showed them His hands and His feet. But while they still did not believe for joy, and marveled, He said to them, “Have you any food here?” So they gave Him a piece of a broiled fish and some honeycomb. And He took it and ate in their presence.
a. As they said these things, Jesus Himself stood in the midst of them: This seems to be the same late Sunday meeting Jesus had with the eleven described in John 20:19-25. In his Gospel, John specifically wrote that Jesus appeared to them when the doors were shut (John 20:19). It seems that Jesus suddenly and perhaps miraculously appeared to the disciples in the midst of a closed room without making an obvious entrance.
b. Peace to you: These were words with new meaning, now that Jesus had risen from the dead. Now, true peace could come between God and man and among men.
i. “About the Lord there were the air and style of one who had peace himself, and loved to communicate it to others. The tone in which he spake peace tended to create it. He was a peace-maker, and a peace-giver, and by this sign they were driven to discern their Leader.” (Spurgeon)
c. Behold My hands and My feet, that it is I Myself: Jesus first displayed His wounded hands and feet to the disciples. In this Jesus wanted to establish both His identity and His bodily existence, and that it was in a transformed state the same body He had before the cross, upon the cross, and set in the tomb.
i. It is remarkable to consider that the resurrection body of Jesus retains the wounds He received in His sufferings and crucifixion. There are many possible reasons for this.
· To exhibit the wounds to the disciples, that they would know that it was the very same Jesus.
· To be the object of eternal amazement to the angels.
· To be His ornaments, trophies of His great work for us.
· To memorialize the weapons with which He defeated death.
· To serve as advocates in His perpetual intercession for us.
· To preserve the evidence of humanity’s crime against Him.
ii. “In the apostles’ case the facts were tested to the utmost, and the truth was not admitted till it was forced upon them. I am not excusing the unbelief of the disciples, but I claim that their witness has all the more weight in it, because it was the result of such cool investigation.” (Spurgeon)
d. Handle Me and see: Jesus wanted to assure them that He was a real, physical body, though of a different order than our own bodies. The resurrected Jesus was not a ghost or phantom.
i. “He distinctly denied that His resurrection was of His Spirit only, for He invited them to touch His hands and His feet. The evidences of a material body are abundant.” (Morgan)
ii. “The account is precisely concerned to refute the notion that Jesus only arose in spirit, or as a ghost. Rather, He arose in spirit and in body; that is, in a spiritual body.” (Pate)
e. A spirit does not have flesh and bones as you see I have: Some make much of the fact that Jesus said His body had flesh and bones and not the more normal phrasing of flesh and blood. The idea is that perhaps the resurrection body of Jesus did not have blood, and perhaps neither will ours. It is also possible that Jesus said flesh and bones because blood could not be felt, but bones can be discerned by touch.
f. They still did not believe for joy, and marveled: Curiously, for that moment joy kept them from faith. This may have been true in the sense that we may believe something to be too good to be true. Yet it is also true that God wants from us a reasoned, thought-out faith, not a giddy easy-believism. Jesus wanted them to think and believe.
i. “Then a great joy, like a tide, swept over them. And they could not believe, they were so glad. Not long ago Christ found them sleeping for sorrow (Luke 22:45), and now He found them disbelieving for joy. Do not forget, then, that joy can hinder faith. It may be as great a foe to faith as sorrow sometimes is.” (Morrison)
ii. There were several times previous to this when joy hindered faith, in the sense of something being too good to be true.
· In Genesis 45:25-26, Jacob could not believe that Joseph was alive because the news seemed to be too good.
· In Job 9:16, Job said that if God would have answered him he would not have believed it.
· In Psalm 126:1 it seemed too good to be true that God again turned Israel’s captivity.
· When Peter was set free from prison in Acts 12, the church didn’t believe it (Acts 12:13-14).
iii. “Their joy was so great that for a moment it was even an impediment to their faith.” (Geldenhuys)
g. Have you any food here? To demonstrate both His identity and the reality of His spiritual body, Jesus ate in their presence. In most of Jesus’ resurrection appearances, He eats with the disciples.
i. This would be another powerful evidence that this was the same Jesus, doing something with them that He did many times before.
2. (44-48) Jesus teaches His disciples.
Then He said to them, “These are the words which I spoke to you while I was still with you, that all things must be fulfilled which were written in the Law of Moses and the Prophets and the Psalms concerning Me.” And He opened their understanding, that they might comprehend the Scriptures. Then He said to them, “Thus it is written, and thus it was necessary for the Christ to suffer and to rise from the dead the third day, and that repentance and remission of sins should be preached in His name to all nations, beginning at Jerusalem. And you are witnesses of these things.”
a. These are the words which I spoke to you while I was still with you: Jesus almost said, “I told you so” by reminding them that all had happened just as He said it would. To help His disciples take it all in, He opened their understanding, that they might comprehend the Scriptures.
i. It must have been before this that the disciples were actually born again by God’s Spirit, when Jesus breathed on them and they received the Holy Spirit (John 20:22).
ii. “In that one hour, in the upper chamber with Christ, Scripture became a new book to the disciples. Never forget how earnestly and constantly our Lord appealed to the testimony of the Word.” (Morrison)
b. It was necessary for the Christ to suffer and to rise from the dead the third day: Jesus wanted them to understand that the cross was not some unfortunate obstacle that had to be hurdled. It was a necessary part of God’s redemptive plan for man, and that it would be in the name of a crucified and risen Savior that repentance and remission of sins will be brought to the world.
i. “They were told by their great Master what to preach, and where to preach it, and how to preach it, and even where to begin to preach it.” (Spurgeon)
ii. Should be preached in His name: To preach the gospel in Jesus’ name means to:
· Preach it under His orders.
· Preach it on His authority.
· Preach it knowing repentance and remission of sin come by the virtue of His name.
· Refuse to preach it in our own name.
c. You are witnesses of these things: Jesus solemnly told them that they were witnesses of these things. Not only witnesses of the events surrounding the work of Jesus, but also of the commission itself to spread the gospel. This was a work they were all mutually responsible for.
d. Beginning at Jerusalem: Their work was to begin at Jerusalem; there are many reasons why it was fitting for the preaching of the gospel to begin there.
· Because the Scriptures say it should be so (Isaiah 2:3, Joel 2:32).
· Because that is where the facts of the gospel took place, and the truth of those facts should be tested straightaway.
· To honor the Jewish people and to bring them the gospel first.
· Because it is good to begin where we are tempted not to begin.
· Because the time is short and it is good to begin near to where we are.
· Because it is good to begin where we may expect opposition.
3. (49-53) The Ascension of Jesus.
“Behold, I send the Promise of My Father upon you; but tarry in the city of Jerusalem until you are endued with power from on high.” And He led them out as far as Bethany, and He lifted up His hands and blessed them. Now it came to pass, while He blessed them, that He was parted from them and carried up into heaven. And they worshiped Him, and returned to Jerusalem with great joy, and were continually in the temple praising and blessing God. Amen.
a. I send the Promise of My Father upon you: They could not do the work Jesus had called them to do unless they were endued with power from on high, and that power would come as the Holy Spirit was poured out upon them.
b. He lifted up His hands and blessed them… while He blessed them: Jesus continued to appear to His people for 40 days following His resurrection. Eventually came the day when He would ascend to heaven. When He did, Jesus left the earth blessing His Church, and He continues to bless them, as much as His people will receive.
i. Nothing but blessing had ever come from those hands; but now, Jesus stands as the High Priest over His people to bless them. “Thus He remains until He comes again, His hands uplifted, and His lips pronouncing the blessedness of His own.” (Morgan)
ii. When Jesus blesses His people, it isn’t just a pious wish like “I hope things work out for you” or “I hope you will be feeling better.” Instead, the blessing of Jesus has inherent power within it.
iii. “If he has blessed you, you shall be blessed, for there is no power in heaven, or earth, or hell, that can reverse the blessing which He gives.” (Spurgeon)
iv. “While we see those uplifted hands, there can be no room for doubt or fear, when other menacing hands are stretched out to harm us or vex us. Whether in life or death, in adversity or prosperity, in sorrow or in joy, we know by that token that we are safe.” (Morgan)
c. He was parted from them and carried up into heaven: Jesus had to ascend so that confidence would be put in the power and ministry of the Holy Spirit, not in the geographical presence of Jesus.
i. Acts 1:3 tells us that this ascension into heaven happened 40 days after Jesus’ resurrection. He spent those 40 days proving the truth of His resurrection and preparing His disciples for His departure.
ii. “He rises by his own power and majesty; he needs no help….He proved the innate power of his Deity, by which he could depart out of the world just when he willed, breaking the law of gravitation, and suspending the laws usually governing matter.” (Spurgeon)
iii. “It was unthinkable that the appearances of Jesus should grow fewer and fewer until finally they petered out. That would have effectively wrecked the faith of men.” (Barclay)
iv. “The ascension differs radically from Jesus’ vanishing from the sight of the disciples at Emmaus and similar happenings. There is an air of finality about it. It is the decisive close of one chapter and the beginning of another.” (Morris)
d. And they worshiped Him, and returned to Jerusalem with great joy, and were continually in the temple praising and blessing God: This shows the wonderful result of the ministry of Jesus in the disciples’ lives.
· They worshipped Him: This means they knew that Jesus was God, and they gave Him the honor He deserves.
· They returned to Jerusalem: This means they did just what Jesus told them to do. They were obedient.
· With great joy: This means they really believed Jesus rose from the dead, and let the joy of that fact touch everything in their life.
· Continually in the temple praising and blessing God: This means that they lived as public followers of Jesus, and could not hide their love and worship towards Him.
i. “A little before, they could not believe for joy. Now they were joyful just because they believed.” (Morrison)
A. The resurrection of Jesus is discovered.
1. (1-3) Women followers of Jesus discover the empty tomb of Jesus.
Now on the first day of the week, very early in the morning, they, and certain other women with them, came to the tomb bringing the spices which they had prepared. But they found the stone rolled away from the tomb. Then they went in and did not find the body of the Lord Jesus.
a. Now on the first day of the week, very early in the morning: Jesus was crucified on Friday (or on Thursday by some accounts). After His entombment, the tomb was sealed and guarded by Roman soldiers (Matthew 27:62-66). The tomb stayed sealed and guarded until discovered by these women on the first day of the week, very early in the morning.
i. A rich man like Joseph of Arimathea would likely have a tomb carved into solid rock; this tomb was in a garden near the place of crucifixion (John 19:41). The tomb would have a small entrance and perhaps one or more compartments where bodies were laid out after being wrapped with linen strips smeared with spices, aloes, and ointments. Customarily, the Jews left these bodies alone for a few years until they decayed down to the bones; then the bones were placed in a small stone box known as an ossuary. The ossuary remained in the tomb with the remains of other family members.
ii. The entrance to the tomb was blocked by a heavy, circular stone rolled securely into a channel, so that only several strong men could move it. This was done to ensure that no one would disturb the remains.
iii. John 19:42 specifically tells us that the tomb of Joseph of Arimathea in which Jesus was laid was close to the place of Jesus’ crucifixion (and each of the two suggested places for Jesus’ death and resurrection bears this out).
Historical Evidence for the Resurrection | Desiring God

The historical evidence for the resurrection of Christ is very good. Scholars such as William Lane Craig, J.P. Moreland, Gary Habermas, and others have done an especially good job of detailing that evidence.1 It is the aim of this article to offer a sort of synthesis of some of their key points and show the strength of the historical evidence for the resurrection of Christ.
A method commonly used today to determine the historicity of an event is "inference to the best explanation." William Lane Craig describes this as an approach where we "begin with the evidence available to us and then infer what would, if true, provide the best explanation of that evidence." In other words, we ought to accept an event as historical if it gives the best explanation for the evidence surrounding it.
When we look at the evidence, the truth of the resurrection emerges very clearly as the best explanation. There is no other theory that even comes close to accounting for the evidence. Therefore, there are solid historical grounds for the truth that Jesus Christ rose from the dead.
It is worth pointing out that in establishing the historicity of the resurrection, we do not need to assume that the New Testament is inspired by God or even trustworthy. While I do believe these things, we are going to focus here on three truths that even critical scholars admit. In other words, these three truths are so strong that they are accepted by serious historians of all stripes. Therefore, any theory must be able to adequately account for these data.
The three truths are:
The tomb in which Jesus was buried was discovered empty by a group of women on the Sunday following the crucifixion.
Jesus' disciples had real experiences with one whom they believed was the risen Christ.
As a result of the preaching of these disciples, which had the resurrection at its center, the Christian church was established and grew.
Virtually all scholars who deal with the resurrection, whatever their school of thought, assent to these three truths. We will see that the resurrection of Christ is the best explanation for each of them individually. But then we will see, even more significantly, that when these facts are taken together we have an even more powerful case for the resurrection--because the skeptic will have to explain away not just one historical fact, but three. These three truths create a strongly woven, three-cord rope that cannot be broken.
The Empty Tomb
To begin, what is the evidence that the tomb in which Jesus was buried was discovered empty by a group of women on the Sunday following the crucifixion?
First, the resurrection was preached in the same city where Jesus had been buried shortly before. Jesus' disciples did not go to some obscure place where no one had heard of Jesus to begin preaching about the resurrection, but instead began preaching in Jerusalem, the very city where Jesus had died and been buried. They could not have done this if Jesus was still in his tomb--no one would have believed them. No one would be foolish enough to believe a man had raised from the dead when his body lay dead in the tomb for all to see. As Paul Althaus writes, the resurrection proclamation "could not have been maintained in Jerusalem for a single day, for a single hour, if the emptiness of the tomb had not been established as a fact for all concerned."
Second, the earliest Jewish arguments against Christianity admit the empty tomb. In Matthew 28:11-15, there is a reference made to the Jews' attempt to refute Christianity by saying that the disciples stole the body. This is significant because it shows that the Jews did not deny the empty tomb. Instead, their "stolen body" theory admitted the significant truth that the tomb was in fact empty. The Toledoth Jesu, a compilation of early Jewish writings, is another source acknowledging this. It acknowledges that the tomb was empty, and attempts to explain it away. Further, we have a record of a second century debate between a Christian and a Jew, in which a reference is made to the fact that the Jews claimed the body was stolen. So it is pretty well established that the early Jews admitted the empty tomb.
Why is this important? Remember that the Jewish leaders were opposed to Christianity. They were hostile witnesses. In acknowledging the empty tomb, they were admitting the reality of a fact that was certainly not in their favor. So why would they admit that the tomb was empty unless the evidence was too strong to be denied? Dr. Paul Maier calls this "positive evidence from a hostile source. In essence, if a source admits a fact that is decidedly not in its favor, the fact is genuine."
Third, the empty tomb account in the gospel of Mark is based upon a source that originated within seven years of the event it narrates. This places the evidence for the empty tomb too early to be legendary, and makes it much more likely that it is accurate. What is the evidence for this? I will list two pieces. A German commentator on Mark, Rudolf Pesch, points out that this pre-Markan source never mentions the high priest by name. "This implies that Caiaphas, who we know was high priest at that time, was still high priest when the story began circulating." For "if it had been written after Caiaphas' term of office, his name would have had to have been used to distinguish him from the next high priest. But since Caiaphas was high priest from A.D. 18 to 37, this story began circulating no later than A.D. 37, within the first seven years after the events," as Michael Horton has summarized it. Furthermore, Pesch argues "that since Paul's traditions concerning the Last Supper [written in 56] (1 Cor 11) presuppose the Markan account, that implies that the Markan source goes right back to the early years" of Christianity (Craig). So the early source Mark used puts the testimony of the empty tomb too early to be legendary.
Fourth, the empty tomb is supported by the historical reliability of the burial story. NT scholars agree that the burial story is one of the best established facts about Jesus. One reason for this is the inclusion of Joseph of Arimathea as the one who buried Christ. Joseph was a member of the Jewish Sanhedrin, a sort of Jewish supreme court. People of this ruling class were simply too well known for fictitious stories about them to be pulled off in this way. This would have exposed the Christians as frauds. So they couldn't have circulated a story about him burying Jesus unless it was true. Also, if the burial account were legendary, one would expect to find conflicting traditions--which we don't have.
But how does the reliability of Jesus' burial argue that the tomb was empty? Because the burial account and empty tomb account have grammatical and linguistic ties, indicating that they are one continuous account. Therefore, if the burial account is accurate the empty tomb is likely to be accurate as well. Further, if the burial account is accurate then everyone knew where Jesus was buried. This would have been decisive evidence to refute the early Christians who were preaching the resurrection--for if the tomb had not been empty, it would have been evident to all and the disciples would have been exposed as frauds at worst, or insane at best.
Fifth, Jesus' tomb was never venerated as a shrine. This is striking because it was the 1st century custom to set up a shrine at the site of a holy man's bones. There were at least 50 such sites in Jesus' day. Since there was no such shrine for Jesus, it suggests that his bones weren't there.
Sixth, Mark's account of the empty tomb is simple and shows no signs of legendary development. This is very apparent when we compare it with the gospel of Peter, a forgery from about 125. This legend has all of the Jewish leaders, Roman guards, and many people from the countryside gathered to watch the resurrection. Then three men come out of the tomb, with their heads reaching up to the clouds. Then a talking cross comes out of the tomb! This is what legend looks like, and we see none of that in Mark's account of the empty tomb--or anywhere else in the gospels for that matter!
Seventh, the tomb was discovered empty by women. Why is this important? Because the testimony of women in 1st century Jewish culture was considered worthless. As Craig says, "if the empty tomb story were a legend, then it is most likely that the male disciples would have been made the first to discover the empty tomb. The fact that despised women, whose testimony was deemed worthless, were the chief witnesses to the fact of the empty tomb can only be plausibly explained if, like it or not, they actually were the discoverers of the empty tomb."
Because of the strong evidence for the empty tomb, most recent scholars do not deny it. D.H. Van Daalen has said, "It is extremely difficult to object to the empty tomb on historical grounds; those who deny it do so on the basis of theological or philosophical assumptions." Jacob Kremer, who has specialized in the study of the resurrection and is a NT critic, has said "By far most exegetes hold firmly to the reliability of the biblical statements about the empty tomb" and he lists twenty-eight scholars to back up his fantastic claim.
I'm sure you've heard of the various theories used to explain away the empty tomb, such as that the body was stolen. But those theories are laughed at today by all serious scholars. In fact, they have been considered dead and refuted for almost a hundred years. For example, the Jews or Romans had no motive to steal the body--they wanted to suppress Christianity, not encourage it by providing it with an empty tomb. The disciples would have had no motive, either. Because of their preaching on the resurrection, they were beaten, killed, and persecuted. Why would they go through all of this for a deliberate lie? No serious scholars hold to any of these theories today. What explanation, then, do the critics offer, you may ask? Craig tells us that "they are self-confessedly without any explanation to offer. There is simply no plausible natural explanation today to account for Jesus' tomb being empty. If we deny the resurrection of Jesus, we are left with an inexplicable mystery." The resurrection of Jesus is not just the best explanation for the empty tomb, it is the only explanation in town!
The Resurrection Appearances
Next, there is the evidence that Jesus' disciples had real experiences with one whom they believed was the risen Christ. This is not commonly disputed today because we have the testimony of the original disciples themselves that they saw Jesus alive again. And you don't need to believe in the reliability of the gospels to believe this. In 1 Corinthians 15:3-8, Paul records an ancient creed concerning Jesus' death, burial, and resurrection appearances that is much earlier than the letter in which Paul is recording it:
For I delivered to you as of first importance what I also received, that Christ died for our sins according to the Scriptures, and that He was buried, and that He was raised on the third day according to the Scriptures, and that He appeared to Cephas, then to the twelve. After that He appeared to more than five hundred brethren at one time...
It is generally agreed by critical scholars that Paul received this creed from Peter and James between 3-5 years after the crucifixion. Now, Peter and James are listed in this creed as having seen the risen Christ. Since they are the ones who gave this creed to Paul, this is therefore a statement of their own testimony. As the Jewish scholar Pinchas Lapide has said, this creed "may be considered the statement of eyewitnesses."
Now, I recognize that just because the disciples think they saw Jesus doesn't automatically mean that they really did. There are three possible alternatives:
They were lying
They hallucinated
They really saw the risen Christ
Which of these is most likely? Were they lying? On this view, the disciples knew that Jesus had not really risen, but they made up this story about the resurrection. But then why did 10 of the disciples willingly die as martyrs for their belief in the resurrection? People will often die for a lie that they believe is the truth. But if Jesus did not rise, the disciples knew it. Thus, they wouldn't have just been dying for a lie that they mistakenly believed was true. They would have been dying for a lie that they knew was a lie. Ten people would not all give their lives for something they know to be a lie. Furthermore, after witnessing events such as Watergate, can we reasonably believe that the disciples could have covered up such a lie?
Because of the absurdity of the theory that the disciples were lying, we can see why almost all scholars today admit that, if nothing else, the disciples at least believed that Jesus appeared to them. But we know that just believing something to be true doesn't make it true. Perhaps the disciples were wrong and had been deceived by a hallucination?
The hallucination theory is untenable because it cannot explain the physical nature of the appearances. The disciples record eating and drinking with Jesus, as well as touching him. This cannot be done with hallucinations. Second, it is highly unlikely that they would all have had the same hallucination. Hallucinations are highly individual, and not group projections. Imagine if I came in here and said to you, "wasn't that a great dream I had last night?" Hallucinations, like dreams, generally don't transfer like that. Further, the hallucination theory cannot explain the conversion of Paul, three years later. Was Paul, the persecutor of Christians, so hoping to see the resurrected Jesus that his mind invented an appearance as well? And perhaps most significantly, the hallucination theory cannot even deal with the evidence for the empty tomb.
Since the disciples could not have been lying or hallucinating, we have only one possible explanation left: the disciples believed that they had seen the risen Jesus because they really had seen the risen Jesus. So, the resurrection appearances alone demonstrate the resurrection. Thus, if we reject the resurrection, we are left with a second inexplicable mystery--first the empty tomb and now the appearances.
The Origin of the Christian Faith
Finally, the existence of the Christian church is strong proof for the resurrection. Why is this? Because even the most skeptical NT scholars admit that the disciples at least believed that Jesus was raised from the grave. But how can we explain the origin of that belief? William Lane Craig points out that there are three possible causes: Christian influences, pagan influences, or Jewish influences.
Could it have been Christian influences? Craig writes, "Since the belief in the resurrection was itself the foundation for Christianity, it cannot be explained as the later product of Christianity." Further, as we saw, if the disciples made it up, then they were frauds and liars--alternatives we have shown to be false. We have also shown the unlikeliness that they hallucinated this belief.
But what about pagan influences? Isn't it often pointed out that there were many myths of dying and rising savior gods at the time of Christianity? Couldn't the disciples have been deluded by those myths and copied them into their own teaching on the resurrection of Christ? In reality, serious scholars have almost universally rejected this theory since WWII, for several reasons. First, it has been shown that these mystery religions had no major influence in Palestine in the 1st century. Second, most of the sources which contain parallels originated after Christianity was established. Third, most of the similarities are often apparent and not real--a result of sloppy terminology on the part of those who explain them. For example, one critic tried to argue that a ceremony of killing a bull and letting the blood drip all over the participants was parallel to holy communion. Fourth, the early disciples were Jews, and it would have been unthinkable for a Jew to borrow from another religion. For they were zealous in their belief that the pagan religions were abhorrent to God.
Jewish influences cannot explain the belief in the resurrection, either. 1st century Judaism had no conception of a single individual rising from the dead in the middle of history. Their concept was always that everybody would be raised together at the end of time. So the idea of one individual rising in the middle of history was foreign to them. Thus, Judaism of that day could have never produced the resurrection hypothesis. This is also another good argument against the theory that the disciples were hallucinating. Psychologists will tell you that hallucinations cannot contain anything new--that is, they cannot contain any idea that isn't already somehow in your mind. Since the early disciples were Jews, they had no conception of the messiah rising from the dead in the middle of history. Thus, they would have never hallucinated about a resurrection of Christ. At best, they would have hallucinated that he had been transported directly to heaven, as Elijah had been in the OT, but they would have never hallucinated a resurrection.
So we see that if the resurrection did not happen, there is no plausible way to account for the origin of the Christian faith. We would be left with a third inexplicable mystery.
Three Independent Facts
These are three independently established facts. If we deny the resurrection, we are left with at least three inexplicable mysteries. But there is a much, much better explanation than a wimpy appeal to mystery or a far-fetched appeal to a stolen body, hallucination, and mystery religion. The best explanation is that Christ in fact rose from the dead! Even if we take each fact by itself, we have good enough evidence. But taken together, we see that the evidence becomes even stronger. For example, even if two of these facts were to be explained away, there would still be the third truth to establish the fact of the resurrection.
These three independently established facts also make alternative explanations less plausible. It is generally agreed that the explanation with the best explanatory scope should be accepted. That is, the theory that explains the most of the evidence is more likely to be true. The resurrection is the only hypothesis that explains all of the evidence. If we deny the resurrection, we must come up with three independent natural explanations, not just one. For example, you would have to propose that the Jews stole the body, then the disciples hallucinated, and then somehow the pagan mystery religions influenced their beliefs to make them think of a resurrection. But we have already seen the implausibility of such theories. And trying to combine them will only make matters worse. As Gary Habermas has said, "Combining three improbable theories will not produce a probable explanation. It will actually increase the degree of improbability. It's like putting leaking buckets inside each other, hoping each one will help stop up the leaks in the others. All you will get is a watery mess."
Legend?
Before examining, briefly, the implications of the resurrection, I wish to take a quick look at perhaps the most popular theory today against the resurrection--that it was a legend that developed over time. The facts we have established so far are enough to put to rest any idea of a legend.
First, we have seen that the testimony of the resurrection goes back to the original experiences. Remember the eyewitness creed of 1 Corinthians 15:3-5? That is the first-hand testimony of Peter and James. So it is not the case that the resurrection belief evolved over time. Instead, we have testimony from the very people who claimed to have experienced it. Second, how can the myth theory explain the evidence for the empty tomb? Third, the myth theory cannot explain the origin of the Christian faith--for we have already seen that the real resurrection of Christ is the only adequate cause for the resurrection belief. Fourth, the myth theory cannot explain the conversion of Paul. Would he be convinced by a myth? His conversion was in fact too early for any myth to have developed by then. How then can we explain his conversion? Do we dare accuse him of lying when he said he saw the risen Christ?
Fifth, we have seen the evidence that the empty tomb story in Mark was very early--within seven years of the events. That is not long enough for legends. Sixth, we have seen that the empty tomb narrative lacks the classic traits of legendary development. Seventh, critical scholars agree that the resurrection message was the foundation of the preaching of the early church. Thus, it could not have been the product of the later church. Eighth, there is very good evidence that the gospels and Acts were written very early. For example, the book of Acts never records the death of Paul, which occurred in about 64, or the destruction of Jerusalem, which occurred in 70.
Since both Jerusalem and Paul are key players in the book of Acts, it seems strange that their demises would be omitted. The best explanation seems to be that Paul's death and Jerusalem's destruction are omitted because the book of Acts had been completed before they happened. This means that Acts was written before 64, when Paul died. Since Acts is volume 2 of Luke's writings, the book of Luke being the first, then the Gospel of Luke was even earlier, perhaps 62. And since most scholars agree that Mark was the first gospel written, that gospel would have been composed even earlier, perhaps in the late 50s. This brings us within twenty years of the events, which is not enough time for legends to develop. So the legend theory is not very plausible.
On the basis of the evidence we have seen, it appears to me that the resurrection is the best explanation. It explains the empty tomb, the resurrection appearances, and the existence of the Christian church. No other competing theory can explain all three of these facts. In fact, none of these competing theories can even give a satisfying explanation for even one of these facts. So it seems like the rational person will accept that Jesus Christ rose from the dead.
The Importance of the Resurrection
But, in conclusion, don't we have to ask ourselves what implications this has? Why does it matter? Or is this some dry, dusty old piece of history that has no relevance to our lives? I believe that the resurrection is the most important truth in the world. It has far reaching implications on our lives.
First, the resurrection proves that the claims Jesus made about himself are true. What did Jesus claim? He claimed to be God. One might say, "I don't believe that He claimed to be God, because I don't believe the Bible." But the fact is that even if we take only the passages which skeptical scholars admit as authentic, it can still be shown that Jesus claimed to be God. I have written a paper elsewhere to demonstrate this. So it is impossible to get around the fact that Jesus claimed to be God. Now, if Jesus had stayed dead in the tomb, it would be foolish to believe this claim. But since He rose from the dead, it would be foolish not to believe it. The resurrection proves that what Jesus said about Himself is true--He is fully God and fully man.
Second, have you ever wondered what reasons there are to believe in the Bible? Is there good reason to believe that it was inspired by God, or is it simply a bunch of interesting myths and legends? The resurrection of Jesus answers the question. If Jesus rose from the dead, then we have seen this validates His claim to be God. If He is God, He speaks with absolute certainty and final authority. Therefore, what Jesus said about the Bible must be true. Surely you are going to accept the testimony of one who rose from the dead over the testimony of a skeptical scholar who will one day die himself--without being able to raise himself on the third day. What did Jesus say about the Bible? He said that it was inspired by God and that it cannot err. I will accept the testimony of Jesus over what I would like to be true and over the opinions of other men and women. Therefore I believe that the Bible is inspired by God, without error. Don't get misled by the numerous skeptical and unbelieving theories about the Bible. Trust Jesus--He rose from the dead.
Third, many people are confused by the many different religions in the world. Are they all from God? But on a closer examination we see that they cannot all be from God, because they all contradict each other. They cannot all be true any more than 2+2 can equal both 4 and 5 at the same time. For example, Christianity is the only religion that believes Jesus Christ is both God and man. All other religions say that he was a good man only--and not God. Clearly, both claims cannot be right! Somebody is wrong. How are we to know which religion is correct? By a simple test: which religion gives the best evidence for its truth? In light of Christ's resurrection, I think that Christianity has the best reasons behind it.
Jesus is the only religious leader who has risen from the dead. All other religious leaders are still in their tombs. Who would you believe? I think the answer is clear: Jesus' resurrection demonstrates that what He said was true. Therefore, we must accept his statement to be the only way to God: "I am the way, the truth, and the life; no one comes to the Father, except through me" (John 14:6).
Fourth, the resurrection of Christ proves that God will judge the world one day. The apostle Paul said, "God is now declaring to men that all everywhere should repent, because He has fixed a day in which He will judge the world in righteousness through a Man whom He has appointed, having furnished proof to all men by raising Him from the dead." The resurrection of Christ proves something very personal and significant to each of us--we will have to give an account of ourselves to a holy God. And if we are honest with ourselves, we will have to admit that we do not measure up to his standard. We are sinful, and therefore deserve to be condemned at His judgment.
Which leads to our fifth point. The resurrection of Christ provides genuine hope for eternal life. Why? Because Jesus says that by trusting in Him, we will be forgiven of our sins and thereby escape being condemned at the judgment. The NT doesn't just tell us that Christ rose from the dead and leave us wondering why He did this. It answers that He did this because we are sinners. And because we have sinned, we are deserving of God's judgment. Since God is just, He cannot simply let our sins go. The penalty for our sins must be paid.
The good news is that God, out of His love, became man in Jesus Christ in order to pay the penalty for sinners. On the cross, Jesus died in the place of those who would come to believe in Him. He took upon Himself the very death that we deserve. The apostle Paul says "He was delivered up because of our sins." But the apostle Paul goes on to say "He was raised to life because of our justification." Paul is saying that Christ's resurrection proves that His mission to conquer sin was successful. His resurrection proves that He is a Savior who is not only willing, but also able, to deliver us from the wrath of God that is coming on the day of judgment. The forgiveness that Jesus died and rose to provide is given to those who trust in Him for salvation and a happy future.
Let me close with the sixth reason the resurrection is significant. The Bible says that Christ's resurrection is the pattern that those who believe in Him will follow. In other words, those who believe in Christ will one day be resurrected by God just as He was. The resurrection proves that those who trust in Christ will not be subject in eternity to a half-human existence in just their souls. It proves that our bodies will be resurrected one day. Because of the resurrection of Christ, believers will one day experience, forever, the freedom of having a glorified soul and body.
See William Lane Craig's Reasonable Faith and The Son Rises, J.P. Moreland's Scaling the Secular City, and Gary Habermas' The Case for the Resurrection of Jesus and Did Jesus Rise from the Dead?, a debate with then-atheist Anthony Flew.
These three truths create a strongly woven, three-cord rope that cannot be broken.
The Empty Tomb
To begin, what is the evidence that the tomb in which Jesus was buried was discovered empty by a group of women on the Sunday following the crucifixion?
First, the resurrection was preached in the same city where Jesus had been buried shortly before. Jesus' disciples did not go to some obscure place where no one had heard of Jesus to begin preaching about the resurrection, but instead began preaching in Jerusalem, the very city where Jesus had died and been buried. They could not have done this if Jesus was still in his tomb--no one would have believed them. No one would be foolish enough to believe a man had risen from the dead when his body lay dead in the tomb for all to see. As Paul Althaus writes, the resurrection proclamation "could not have been maintained in Jerusalem for a single day, for a single hour, if the emptiness of the tomb had not been established as a fact for all concerned."
Second, the earliest Jewish arguments against Christianity admit the empty tomb. In Matthew 28:11-15, there is a reference made to the Jews' attempt to refute Christianity by saying that the disciples stole the body. This is significant because it shows that the Jews did not deny the empty tomb. Instead, their "stolen body" theory admitted the significant truth that the tomb was in fact empty. The Toledoth Jesu, a compilation of early Jewish writings, is another source acknowledging this. It acknowledges that the tomb was empty, and attempts to explain it away. Further, we have a record of a second century debate between a Christian and a Jew, in which a reference is made to the fact that the Jews claim the body was stolen. So it is pretty well established that the early Jews admitted the empty tomb.
Why is this important? Remember that the Jewish leaders were opposed to Christianity. They were hostile witnesses. In acknowledging the empty tomb, they were admitting the reality of a fact that was certainly not in their favor. So why would they admit that the tomb was empty unless the evidence was too strong to be denied? Dr. Paul Maier calls this "positive evidence from a hostile source."
Enduring Word Bible Commentary: John Chapter 20
Now on the first day of the week Mary Magdalene went to the tomb early, while it was still dark, and saw that the stone had been taken away from the tomb. Then she ran and came to Simon Peter, and to the other disciple, whom Jesus loved, and said to them, “They have taken away the Lord out of the tomb, and we do not know where they have laid Him.”
a. Now on the first day of the week Mary Magdalene went to the tomb early: Jesus was crucified on Friday (or on Thursday by some accounts). After His entombment, the tomb was sealed and guarded by Roman soldiers (Matthew 27:62-66). The tomb stayed sealed and guarded until discovered on the first day of the week… early, while it was still dark.
b. Mary Magdalene… she ran and came to Simon Peter: Other gospels explain she was not the only woman to come to the tomb that morning (at least three other women accompanied her). Mary was the one who ran back and told the disciples about the empty tomb, so John mentions her.
i. Jesus had cast seven demons out of this Mary (Luke 8:2, Mark 16:9). Her troubled past didn’t disqualify her from being the first witness of the resurrected Jesus and His first commissioned messenger of His resurrection.
ii. The women came to complete the work begun by Joseph and Nicodemus. “Probably, in view of the lateness of the hour and the nearness of the sabbath, Nicodemus was not able to use all the spices he had brought in the way intended.” (Morris)
c. They have taken away the Lord out of the tomb: When she saw the empty tomb, Mary’s first reaction was to think the body of Jesus was stolen. She wasn’t wishing for or anticipating the resurrection of Jesus, and she certainly did not imagine it out of hope.
i. We do not know where: “The plural may naturally be accepted as confirming Mark’s account that she was not alone.” (Dods)
2. (3-4) Peter and John run to the tomb.
Peter therefore went out, and the other disciple, and were going to the tomb. So they both ran together, and the other disciple outran Peter and came to the tomb first.
a. Peter therefore went out, and the other disciple: Peter and John heard the news from Mary and immediately started for the tomb. In keeping with the author’s humility, John did not refer to himself directly, but only as the other disciple.
b. They both ran together, and the other disciple outran Peter and came to the tomb first: John was humble enough to avoid the mention of his own name, but competitive enough to tell us that he outran Peter to the tomb.
i. By tradition, Peter was older than John. We might picture a man in his late forties or early fifties like Peter running to the tomb with great labor, and a man in his mid-twenties easily outrunning him.
ii. This shows that they both ran hard. Peter and John had just heard life-changing news: that the tomb was empty. They couldn’t be indifferent or detached to this news; they had to see for themselves.
3. (5-10) Peter and John examine the empty tomb.
And he, stooping down and looking in, saw the linen cloths lying there; yet he did not go in. Then Simon Peter came, following him, and went into the tomb; and he saw the linen cloths lying there, and the handkerchief that had been around His head, not lying with the linen cloths, but folded together in a place by itself. Then the other disciple, who came to the tomb first, went in also; and he saw and believed. For as yet they did not know the Scripture, that He must rise again from the dead. Then the disciples went away again to their own homes.
a. Stooping down and looking in: Arriving first at the tomb, John was looking in (the ancient Greek word blepei meaning “to clearly see a material object”), and he saw the grave wrappings of Jesus still in the tomb (saw the linen cloths lying there). John clearly saw this, and there was no mistake about what he saw.
i. Yet he did not go in: Something kept John from actually going into the tomb. “Having seen that the graveclothes were still within, the other disciple probably concluded that the body was also there and so refrained from entering. Either he felt that he should not enter the tomb out of respect for the dead, or else he feared the ceremonial defilement of touching a corpse.” (Tenney)
ii. A typical rich man’s tomb of that time would be large enough to walk into, with a place to lay out the body on one side and a bench for mourners on the other side. The entrance might be an opening only 3 feet (1 meter) high and 2.5 feet (.75 meters) wide. It was large enough to get into, yet there was a bit of bowing and turning necessary. There was some commitment needed to go inside the tomb, and for some reason John did not go in.
b. Then Simon Peter came, following him, and went into the tomb: Whatever kept John from going in didn't stop Peter. When he finally arrived he immediately went into the tomb. This action-oriented impulsiveness was characteristic of Peter. John wanted to stop and think about it but Peter went right in.
c. He saw the linen cloths lying there: Going in, Peter then saw (the ancient Greek word theorei meaning “to contemplate, observe, scrutinize”) that the cloths were still orderly and neat. It looked as if the body evaporated out of the burial wrappings without disturbing their place.
i. The phrasing of linen cloths lying there and folded together in a place by itself indicates the orderly arrangement of the burial wrappings. Prepared for burial, those strips of linen cloths were smeared with ointments and aloes and spices, and the linen cloths were applied in several layers. The burial of Jesus on the day of His death was hurried, and the women came early Sunday morning to apply more layers.
ii. The mixture of ointments and aloes and spices would dry and harden the linen cloths, making something of a mummy or a cocoon. The normal removal of these burial wrappings would require some tearing or cutting; Peter saw that it was no normal removal of the burial wrappings. “The whole point of the description is that the grave-clothes did not look as if they had been put off or taken off; they were lying there in their regular folds as if the body of Jesus had simply evaporated out of them.” (Barclay)
iii. The neat, orderly arrangement of the linen cloths showed that no human hand, at least not in any way that was immediately apparent, had removed the burial wrappings of Jesus. All this demonstrated that something absolutely unique had happened in that now-empty tomb.
· The linen cloths were there – the body had not been removed with them.
· The linen cloths were orderly – not removed in any normal way by the person wrapped in them.
· The linen cloths were orderly – not removed by grave robbers or vandals.
iv. It has been suggested that the burial wrappings of Jesus have been preserved in the Shroud of Turin. The Shroud of Turin can probably never be positively proved to be part of the burial wrappings of Jesus. But, “The evidence thus far indicates the probable conclusions that the shroud is ancient (perhaps from the first century), that it does not contradict the NT accounts, and that the image is not a fake. It may well be the actual burial garment of Jesus.” (Evangelical Dictionary of Theology)
v. The image on the shroud is of a crucified male, bearded, 5'11" in height, weighing about 175 pounds. His physique was muscular and well built, and he is an estimated age of 30-35 years. His long hair is tied into a pigtail and there is no evidence of decomposition on the cloth. Results of the Shroud of Turin Research Project in October 1978 determined that the Shroud is not a painting or a forgery. They determined that its blood is real blood and the image seems to be some type of scorch, though they cannot account for how it was made.
vi. The Shroud of Turin is an interesting object, yet there are also reasons for skepticism.
· John described two aspects of the grave wrappings: the linen cloths and the handkerchief that had been around His head. This would imply that the head and the body of Jesus were wrapped separately, while the Shroud of Turin presents an image of an entire body on one cloth. It is possible that the Shroud was underneath those two sets of wrappings and unmentioned by John, but we can’t say that John describes a fabric such as the Shroud of Turin.
· However, Trench suggests: “The winding sheet which had been folded over all (Matthew, Mark, Luke) must have been unfolded and laid back along either side so as to leave the bandage-casing exposed.”
· We may suppose a good reason why God would not want or allow the preservation of Jesus’ burial wrappings, not wanting to leave behind a relic that would be inevitably worshipped.
vii. The handkerchief that had been around His head: "This means the headcloth still retained the shape the contour of Jesus' head had given it and that it was still separated from the other wrappings by a space that suggested the distance between the neck of the deceased and the upper chest, where the wrappings of the body would have begun." (Tenney)
d. The other disciple… he saw and believed: After Peter went into the tomb John also went in. He then saw (the ancient Greek word eiden meaning, “to understand, to perceive the significance of”) and then John believed. The distinctive arrangement of the burial wrappings convinced him.
i. Generally, the very first Christians did not believe in the resurrection only because the tomb was empty, but because they saw and met the resurrected Jesus. John was something of an exception; he believed simply by seeing the empty tomb, before meeting the resurrected Jesus.
ii. “He believed that Jesus was risen from the dead. He received into his mind, embraced with his assent, THE FACT OF THE RESURRECTION, for the first time. He did this, on the ocular testimony before him; for as yet neither of them knew the Scripture.” (Alford)
iii. “John believed, but Peter was still in the dark. Again the former had outrun his friend.” (Maclaren)
iv. “Some of the best books on the Resurrection have been written by lawyers, some of whom originally set out to disprove it. I am thinking of men like Frank Morrison, Gilbert West, J.N.D. Anderson, and others. Sir Edward Clark, another English jurist, once wrote: ‘As a lawyer I have made a prolonged study of the evidences for the first Easter day. To me the evidence is conclusive, and over and over again in the High Court I have secured the verdict on evidence not nearly so compelling… As a lawyer I accept it unreservedly as the testimony of men to facts that they were able to substantiate.’” (Boice)
e. For as yet they did not know the Scripture, that He must rise again from the dead: At this point Peter and John were persuaded of the fact of the resurrection; they believed. Yet because they did not know the Scripture, that He must rise again from the dead, they did not understand the meaning of the resurrection.
i. Knowing the fact of the resurrection is an important start, but not enough. We need to let the Bible tell us the meaning and the importance of Jesus’ resurrection.
· The resurrection means that Jesus was declared to be the Son of God with power, according to the Spirit of holiness, by the resurrection from the dead (Romans 1:4).
· The resurrection means that we have assurance of our own resurrection: For if we believe that Jesus died and rose again, even so God will bring with Him those who sleep in Jesus (1 Thessalonians 4:14).
· The resurrection means that God has an eternal plan for these bodies of ours. “There was nothing in the teaching of Jesus approaching the Gnostic heresy that declared that the flesh is inherently evil. Plato could only get rid of sin by getting rid of the body. Jesus retains the body; and declares that God feeds the body as well as the soul, that the body is as sacred a thing as the soul, since the soul makes it its sanctuary.” (Morgan)
· The resurrection means that Jesus has a continuing ministry: He is also able to save to the uttermost those who come to God through Him, since He ever lives to make intercession for them (Hebrews 7:25).
· The resurrection means that Christianity and its God are unique and completely different from all other world religions.
· The resurrection proves that though it looked like Jesus died on the cross as a common criminal He actually died as a sinless man, out of love and self-sacrifice to bear the guilt of our sin. The death of Jesus on the cross was the payment, but the resurrection was the receipt, showing that the payment was perfect in the sight of God the Father.
B. Mary Magdalene meets the risen Jesus.
1. (11-13) Mary, stricken with grief, sees two angels in the empty tomb.
But Mary stood outside by the tomb weeping, and as she wept she stooped down and looked into the tomb. And she saw two angels in white sitting, one at the head and the other at the feet, where the body of Jesus had lain. Then they said to her, “Woman, why are you weeping?” She said to them, “Because they have taken away my Lord, and I do not know where they have laid Him.”
a. Mary stood outside the tomb weeping: Peter and John examined the evidence of the empty tomb and John was persuaded that Jesus rose from the dead, though he did not yet understand the meaning of it all. Mary did not yet have the confidence that Jesus was resurrected, so she wept.
b. As she wept she stooped down and looked into the tomb: Mary wanted to see what Peter and John saw, so she made her own examination. Yet in the moment between their examination and Mary’s, something was different in the tomb.
c. She saw two angels in white sitting: Mary didn’t notice the burial wrappings and their curious arrangement; now there were two angels in the tomb. Mary didn’t seem to react with shock or fear; she probably did not immediately perceive that they were angels (Hebrews 13:2).
i. “The presence of angels was a trifle to Mary, who had only one thought – the absence of her Lord.” (Maclaren)
ii. “Sent for her sake, and the rest, to certify them of the resurrection. It is their office (and they are glad of it) to comfort and counsel the saints still, as it were by speaking and doing after a spiritual manner.” (Trapp)
iii. One at the head and the other at the feet: “So were the cherubim placed at each end of the mercy-seat: Exodus 25:18, 19.” (Clarke)
d. They have taken away my Lord, and I do not know where they have laid Him: Mary wasn’t thinking or dreaming that Jesus was alive. She believed He was still dead, and only wanted to know where He was so she could do the final work of preparing His body for burial. This is more evidence that she didn’t notice the burial cloths because of the angels.
2. (14-16) Mary meets Jesus.
Now when she had said this, she turned around and saw Jesus standing there, and did not know that it was Jesus. Jesus said to her, “Woman, why are you weeping? Whom are you seeking?” She, supposing Him to be the gardener, said to Him, “Sir, if You have carried Him away, tell me where You have laid Him, and I will take Him away.” Jesus said to her, “Mary!” She turned and said to Him, “Rabboni!” (which is to say, Teacher).
a. She turned around and saw Jesus standing there: Mary wondered and worried about where Jesus was, but He wasn’t far away.
i. “Perhaps Mary withdrew abruptly. She may have heard a movement behind her. Or, as many commentators from Chrysostom down have held, the angels might have made some motion at the sight of the Lord behind Mary. We do not know.” (Morris)
b. Did not know that it was Jesus: Mary certainly knew who Jesus was, and it was strange that she did not immediately recognize Him. Some think it was because she was emotionally distressed and had tears in her eyes. Others speculate it was because Jesus looked somewhat different, retaining at least some of the marks of His suffering.
i. “She did not expect Him to be there, and was wholly preoccupied with other thoughts.” (Alford)
ii. “Not merely because her eyes were dim with tears, but because He was altered in appearance; as Mark (16:12).” (Dods)
iii. “There seems to have been something different about the risen Jesus so that He was not always recognized.” (Morris)
c. Why are You weeping? Whom are you seeking? Jesus did not immediately reveal Himself to Mary. It wasn’t to play some trick on her; it was to break through her unbelief and forgetfulness of Jesus’ promise of resurrection.
d. Tell me where You have laid Him, and I will take Him away: It’s possible that Mary was a large, strong woman and was physically capable of carrying away the body of a dead man. It is more likely that she was simply so filled with sorrow and devotion that she wasn’t thinking through her plans carefully.
i. “Her words reveal her devotion. She never paused to consider how she would carry the corpse of a full-grown man or how she would explain her possession of it.” (Tenney)
ii. “How true is the proverb, Love feels no load! Jesus was in the prime of life when he was crucified, and had a hundred pounds weight of spices added to his body; and yet Mary thinks of nothing less than carrying him away with her, if she can but find where he is laid!” (Clarke)
e. Jesus said to her, “Mary!” Jesus had only to say one word, and all was explained. She heard in the name and the tone the voice of her beloved Messiah, and instantly called Him Rabboni (as did another Mary in John 11:28).
i. “Jesus says to her, ‘Mariam,’ the Hebrew name, of which the Greek form is Maria.” (Trench) Jesus didn’t reveal Himself to Mary by telling her who He was, but by telling her who she was to Him.
ii. Her eyes failed her, but her ears could not mistake that voice saying her name. “Many had called her by that name. She had been wont to hear it many times a day from many lips; but only One had spoken it with that intonation.” (Meyer)
iii. “Never was a one-word utterance more charged with emotion than this.” (Tasker) “Jesus can preach a perfect sermon in one word.” (Spurgeon)
iv. “In the garden of Eden, immediately after the Fall, the sentence of sorrow, and of sorrow multiplied, fell upon the woman. In the garden where Christ had been buried, after his resurrection, the news of comfort — comfort rich and divine, — came to a woman through the woman’s promised Seed, the Lord Jesus Christ. If the sentence must fall heavily upon the woman, so must the comfort come most sweetly to her.” (Spurgeon)
3. (17-18) Jesus sends Mary to tell the disciples.
Jesus said to her, “Do not cling to Me, for I have not yet ascended to My Father; but go to My brethren and say to them, ‘I am ascending to My Father and your Father, and to My God and your God.’” Mary Magdalene came and told the disciples that she had seen the Lord, and that He had spoken these things to her.
a. Do not cling to Me: Some confusion has come regarding what Jesus meant, mostly owing to the phrasing of this in the older King James Version: Touch me not. Some think Jesus told Mary not to touch Him in any way, as if her contact would somehow defile Him. Yet the sense is that Mary immediately held on to Jesus and did not want to let Him go.
i. “Probably we should understand the Greek tense here in the strict sense. The present imperative with a negative means ‘Stop doing something’ rather than ‘Do not do something’.” (Morris)
ii. “Jesus was not protesting that Mary should not touch Him lest He be defiled, but was admonishing her not to detain Him because He would see her and the disciples again.” (Tenney)
iii. “We need not be detained by that curiosity of exegesis which supposes that he still had to enter the heavenly holy of holies to complete the antitype of the Day of Atonement initiated by his sacrifice on the cross.” (Bruce)
iv. This also shows that the resurrection body of Jesus was different, yet similar to His pre-resurrection body. It was definitely real and tangible, and Jesus was not a phantom.
b. Go to My brethren and say to them: Jesus made a woman the first witness of His resurrection. The law courts of that day would not recognize the testimony of a woman, but Jesus did.
i. This also argues for the historic truth of this account. If someone fabricated this story, they would not make the first witnesses to the resurrection women, who were commonly (if unfairly) regarded as unreliable witnesses.
ii. “Celsus, the anti-Christian polemicist of the later second century, dismisses the resurrection narrative as based on the hallucinations of a ‘hysterical woman’.” (Bruce)
iii. My brethren: It is touching that Jesus referred to His disciples – those who had all forsaken Him, except for John – as His brethren. It’s also touching that Mary understood exactly who He meant.
iv. “I do not remember that the Lord Jesus ever called his disciples his brethren till that time. He called them ‘servants’; he called them ‘friends’; but now that he has risen from the dead, he says, ‘my brethren.’” (Spurgeon)
c. I am ascending to My Father and your Father, and to My God and your God: Jesus did not say, Our Father and God, and therefore pointed out a difference between His relationship with God and the disciples’ relationship with God. The One enthroned in the heavens is certainly their Father and God, but not in the identical way that He is Father and God to Jesus.
i. “He says not ‘Our Father’: in one sense therefore, He is mine, in another sense He is yours; by nature mine, by grace yours… my God, under whom I also am as a man; your God, between whom and you I am a mediator.” (Augustine)
ii. He also made specific mention of His coming ascension. The word of His ascension let them know He was raised never to die again.
C. The disciples meet the risen Jesus.
1. (19) Jesus appears in their midst.
Then, the same day at evening, being the first day of the week, when the doors were shut where the disciples were assembled, for fear of the Jews, Jesus came and stood in the midst, and said to them, “Peace be with you.”
a. The same day at evening: This took place on the same day that the tomb was found empty and Mary met the resurrected Jesus. We are told of five appearances of Jesus on the resurrection day.
· To Mary Magdalene (John 20:11-18).
· To the other women (Matthew 28:9-10).
· To the two on the road to Emmaus (Mark 16:12-13, Luke 24:13-32).
· To Peter (Luke 24:33-35, 1 Corinthians 15:5).
· To ten of the disciples, Thomas being absent (John 20:19-23).
b. Where the disciples were assembled: It was good that the disciples stayed together. Jesus told them that when He departed they must love one another, which assumes that they would stay together (John 15:17). He also prayed for their unity after His departure (John 17:11). This command was fulfilled and this prayer was answered, at least in the days immediately after His crucifixion.
c. When the doors were shut: The sense is not only that the doors were shut, but secured and locked against any unwelcome entry. The idea is that the room was secure when suddenly Jesus came and stood in the midst. We aren’t told how Jesus entered the room, but the sense is that it was not in any normal way and that He seemed to simply appear.
i. “When he tells us that the doors were ‘shut’ we should understand this to mean ‘locked’ as the following explanation, that this was due to fear of the Jews, shows.” (Morris)
ii. The doors were shut and locked so the disciples wouldn’t get hurt. Those shut and locked doors also shut out Jesus. Thankfully, Jesus was greater than the shut and locked doors, and made His way in despite them. Still, it’s better to unlock and open the door for Jesus.
iii. “Afterwards, when the Spirit came down upon them, they not only set open the doors, but preached Christ boldly in the temple without dread of danger.” (Trapp)
iv. Jesus came and stood: “The word describes that unseen arrival among them which preceded His becoming visible to them.” (Alford)
v. This strange and miraculous appearance of Jesus apparently was to demonstrate that resurrection bodies are not subject to the same limitations as our present bodies. Since we will be raised in the same manner as Jesus (Romans 6:4, 1 Corinthians 15:42-45), this gives us some hint of the nature of our future body in the resurrection.
vi. “We can scarcely say more than that John wants us to see that the risen Jesus was not limited by closed doors. Miraculously He stood in their midst.” (Morris)
vii. Jesus might have gone anywhere and done anything after His resurrection, but He wanted to be with His people. He sought out His people.
d. Peace be with you: After their desertion of Jesus on the day of His crucifixion, the disciples probably expected words of rebuke or blame. Instead, Jesus brought a word of peace, reconciling peace.
i. “‘Peace to you,’ is an assurance that there is no cause to fear, and that all is well: for they (Luke 24:36) were alarmed by His manifestation.” (Trench)
ii. “Our Master came to his cowardly, faithless disciples, and stood in the midst of them, uttering the cheering salutation, ‘Peace be unto you!’ My soul, why should he not come to thee, though thou be the most unworthy of all whom he has bought with his blood?” (Spurgeon)
2. (20-23) The risen Jesus serves His disciples.
When He had said this, He showed them His hands and His side. Then the disciples were glad when they saw the Lord. So Jesus said to them again, “Peace to you! As the Father has sent Me, I also send you.” And when He had said this, He breathed on them, and said to them, “Receive the Holy Spirit. If you forgive the sins of any, they are forgiven them; if you retain the sins of any, they are retained.”
a. He showed them His hands and His side: Jesus assured them He was actually Jesus of Nazareth and that He was really raised from the dead. Jesus did this for more than the 10 disciples present; Luke mentioned this gathering as including not only the disciples but also those who were with them gathered together (Luke 24:33) and that Jesus invited them to actually touch His body to see that it was real (Luke 24:39-40).
i. “Jesus did not come into their midst to show them a new thought, a philosophic discovery, or even a deep doctrine, or a profound mystery, or indeed anything but himself. He was a sacred egoist that day, for what he spake of was himself; and what he revealed was himself.” (Spurgeon)
b. Peace to you! Jesus just gave them the blessing of His peace (John 20:19). Perhaps the emphasis there was to calm their fear and shock at the moment (Luke 24:36). The repetition of this promise makes this gift of peace much larger and more significant. The resurrected Jesus brings peace.
i. “He had faced and defeated all the forces which destroy the peace of man. As He said, ‘Peace be unto you,’ He was doing infinitely more than expressing a wish. He was making a declaration. He was bestowing a benediction. He was imparting a blessing.” (Morgan)
· My sins are forgiven – peace.
· The slavery to sin is broken – peace.
· My Savior takes my fears and cares – peace.
· My life is settled for eternity – peace.
ii. “We must ourselves have peace both inwardly and outwardly, before we can effectively preach the gospel of peace to others.” (Boice)
c. As the Father has sent Me, I also send you: Jesus gave His disciples a mission, to continue His work on this earth. This was the commission to do what Jesus had already prayed for in John 17:18: As You sent Me into the world, I also have sent them into the world.
i. This means that both then and now, disciples are sent after the pattern of the Father’s sending of the Son. As previously observed on John 17:18, this means that disciples are sent ones – missionaries, after the Latin verb “to send.”
ii. Luke 24:33 described this meeting on the evening of Resurrection Sunday and is important: the eleven and those who were with them gathered together. It means that it was not only the 10 disciples (lacking Judas and Thomas) who received from Jesus the Holy Spirit and this commission. It means that Jesus sends every believer into the world on mission.
iii. As with John 17:18, we think of how Jesus was sent and connect it with the truth, I also send you. We are sent the same way Jesus was.
· Jesus was not sent as a philosopher like Plato or Aristotle, though He knew higher philosophy than them all.
· Jesus was not sent as an inventor or a discoverer, though He could have invented new things and discovered new lands.
· Jesus was not sent as a conqueror, though He was mightier than Alexander or Caesar.
· Jesus was sent to teach.
· Jesus was sent to live among us.
· Jesus was sent to suffer for truth and righteousness.
· Jesus was sent to rescue men.
d. Receive the Holy Spirit: Jesus gave His disciples the Holy Spirit, bringing new life and the ability to carry out their mission. It seems John noted a deliberate connection between this breathing on the disciples and when at creation God breathed life into man. This was a work of re-creation, even as God breathed life into the first man. This is where the disciples were born again.
i. “Intimating, by this, that they were to be made new men, in order to be properly qualified for the work to which he had called them; for in this breathing he evidently alluded to the first creation of man, when God breathed into him the breath of lives.” (Clarke)
ii. “The Greek word is the same as used by the LXX in those two pregnant phrases of the O.T., viz. Genesis 2:7, ‘the Lord God breathed into man’s nostrils the breath (or The Spirit) of Life’; and Ezekiel 37:9, ‘breathe into these slain and they shall live’ (the vision of the Dry Bones).” (Trench)
iii. “At an earlier stage in Jesus’ ministry the evangelist had said, ‘the Spirit was not yet present, because Jesus had not yet been glorified’ (John 7:39): now the time for imparting the Spirit has come.” (Bruce)
iv. They received the same Holy Spirit that was in Jesus; the same Spirit that empowered and enabled all His words and works. “The breathing upon them was meant to convey the impression that His very own Spirit was imparted to them.” (Dods)
e. If you forgive the sins of any: Jesus gave His disciples authority to announce forgiveness and to warn of guilt, as authorized by the Holy Spirit. We can say that Peter’s preaching on Pentecost (Acts 2:38) was an exercise of this promised power to announce forgiveness of sins.
i. The connection with the reception of the Holy Spirit is important. “The words of Jesus emphasize that the Holy Spirit is not bestowed on the church as an ornament but to empower an effective application of the work of Christ to all men.” (Tenney)
ii. This lays down the duty of the church to proclaim forgiveness to the repentant believer, and the duty of the church to warn the unbeliever that they are in danger of forfeiting the mercy of God. We don’t create the forgiveness or deny it; we announce it according to God’s word and the wisdom of the Spirit.
iii. “The Church collectively declares the conditions on which sins are remitted, and with the plenary powers of an ambassador pronounces their remission or their retention.” (Trench)
iv. “He is saying that the Spirit-filled church has the authority to declare which are the sins that are forgiven and which are the sins that are retained. This accords with the Rabbinical teaching which spoke of certain sins as ‘bound’ and others as ‘loosed’.” (Morris)
v. The work of Jesus for His disciples on resurrection Sunday gives an ongoing pattern for His work among His people. Jesus wants to continue this fourfold ministry of assurance, mission, the Holy Spirit and authority to His people today.
3. (24-25) The skepticism of Thomas, the absent disciple.
Now Thomas, called the Twin, one of the twelve, was not with them when Jesus came. The other disciples therefore said to him, “We have seen the Lord.” So he said to them, “Unless I see in His hands the print of the nails, and put my finger into the print of the nails, and put my hand into His side, I will not believe.”
a. Thomas… was not with them when Jesus came: We are not told why Thomas was not with them, and he was not criticized for his absence.
b. We have seen the Lord: Thomas was not criticized for his absence, but he still missed out. There was a blessing for those present that Thomas did not receive.
i. “Thomas did the very worst thing that a melancholy man can do, went away to brood in a corner by himself, and so to exaggerate all his idiosyncrasies, to distort the proportion of the truth, and hug his despair, by separating himself from his fellows. Therefore he lost what they got, the sight of the Lord.” (Maclaren)
c. Unless I see in His hands the print of the nails, and put my finger into the print of the nails, and put my hand into His side, I will not believe: Thomas is often known as Doubting Thomas, a title that misstates his error and ignores what became of him. Here we could say that Thomas didn’t doubt; he plainly and strongly refused to believe.
· Thomas refused to believe the testimony of many reliable witnesses.
· Thomas made an extreme demand for evidence; evidence not only of sight but of touch, and to repeatedly touch the multiple wounds of Jesus.
· Thomas steadfastly refused to believe unless these conditions were met (I will not believe).
i. “Normally this is taken to indicate that Thomas was of a more skeptical turn of mind than the others, and, of course, he may have been. But another possibility should not be overlooked, namely that he was so shocked by the tragedy of the crucifixion that he did not find it easy to think of its consequences as being annulled.” (Morris)
ii. “Perhaps he had abandoned hope; – the strong evidence of his senses having finally convinced him that the pierced side and wounded hands betokened such a death that revivification was impossible.” (Alford)
iii. Adam Clarke called Thomas’ unbelief unreasonable, obstinate, prejudiced, presumptuous, and insolent. Still, it was good and significant that Thomas still wanted to be around those who believed.
iv. The unbelief of Thomas was strong, but honestly spoken. It was good that he refused to pretend to believe when he did not believe.
v. Some find it interesting that Thomas made no mention of wounds in the feet of Jesus. “There is no mention in this Gospel, or in Matthew or Luke, of the piercing of the feet. That the feet of Jesus may have been nailed to the cross, rather than fastened with a rope, which was the common practice, is an inference from Luke 24:39.” (Tasker)
4. (26-27) One week later, Jesus speaks to the skeptic Thomas.
And after eight days His disciples were again inside, and Thomas with them. Jesus came, the doors being shut, and stood in the midst, and said, “Peace to you!” Then He said to Thomas, “Reach your finger here, and look at My hands; and reach your hand here, and put it into My side. Do not be unbelieving, but believing.”
a. After eight days: The idea is that Jesus had this meeting with the disciples now including Thomas on the following Sunday. Jesus entered the room in the same mysterious and remarkable way (the doors being shut, and stood in the midst). Jesus also gave the same greeting (Peace to you!).
i. The locked doors of their meeting room show that though they believed Jesus to be raised from the dead, that truth had yet to work its meaning and significance into every area of their thinking and actions.
ii. There is significance in that these two important meetings with Jesus and His assembled disciples took place on Sundays; this is the first indication we have of Sunday meetings of the disciples. “The memory of this coming of the Lord to his disciples may well have something to do with the church’s early practice of meeting together on the evening of the first day of the week and bespeaking his presence with them in the words Marana tha, ‘Our Lord, come!’” (Bruce)
b. Reach your finger here, and look at My hands; and reach your hand here, and put it into My side: Jesus granted Thomas the evidence he demanded. We suppose that Jesus was not obligated to do this; He could have rightly demanded faith from Thomas on the basis of the reliable evidence from others. Yet in mercy and kindness, Jesus gave Thomas what he asked for.
i. It must have been a surprise to Thomas that Jesus repeated back to him just what he said to the other disciples (John 20:25). Jesus knew the demands and unbelief of Thomas.
ii. “There is no surer way of making a good man ashamed of his wild words than just to say them over again to him when he is calm and cool.” (Maclaren)
iii. Jesus’ interaction with Thomas shows that the resurrected Jesus is full of love and graciousness and gentleness to His people. That didn’t change. “The whole conversation was indeed a rebuke, but so veiled with love that Thomas could scarcely think it so.” (Spurgeon)
iv. There is a clear lesson: When you want assurance, look to the wounds of Jesus. They are evidence of His love, of His sacrifice, of His victory, of His resurrection.
c. Do not be unbelieving, but believing: Jesus clearly commanded Thomas to stop his unbelief and to start believing. Jesus was generous and merciful to Thomas and his unbelief, but Jesus did not praise his unbelief. Jesus wanted to move him from doubt and unbelief to faith.
i. Jesus did not even credit Thomas for his prior belief, or his belief in the prior teaching and miracles of Jesus. Because Thomas did not believe in the resurrected Jesus, Jesus considered him unbelieving.
ii. Often God does not condemn our doubt, and He often reveals and does remarkable things to speak to our doubt and unbelief. But doubt and unbelief are not desired conditions for the disciple of Jesus. If they are checkpoints along a path leading to faith, they should be dealt with in generous love; but doubt and unbelief should never be thought of as destinations for the disciple.
5. (28-29) Thomas responds in faith.
And Thomas answered and said to Him, “My Lord and my God!” Jesus said to him, “Thomas, because you have seen Me, you have believed. Blessed are those who have not seen and yet have believed.”
a. My Lord and my God: Thomas made an immediate transition from declared unbelief (John 20:25) to radical belief. He addressed Jesus with titles of deity, calling Him Lord and God. It is also significant that Jesus accepted these titles, and did not tell Thomas, “Don’t call Me that.”
i. “Sight may have made Thomas believe that Jesus was risen, but it was something other and more inward than sight that opened his lips to cry, ‘My Lord and my God!’” (Maclaren)
ii. “Thomas now avows the faith which aforetime he had disclaimed. ‘I will not believe,’ said he, ‘except-except-except.’ Now he believes a great deal more than some of the other Apostles did; so he openly avows it. He was the first divine who ever taught the Deity of Christ from his wounds.” (Spurgeon)
iii. “The words are not a mere exclamation of surprise. That is forbidden by [greek text]; they mean, ‘Thou art my Lord and my God’. The repeated pronoun lends emphasis.” (Dods)
iv. “For a Jew to call another human associate ‘my Lord and my God’ would be almost incredible… Thomas, in the light of the Resurrection, applied to Jesus the titles of Lord (kyrios) and God (theos), both of which were titles of deity.” (Tenney)
v. “In Pliny’s letter to Trajan (112 A.D.) he describes the Christians as singing hymns to Christ as God.” (Dods)
vi. Thomas was honest enough to say when he didn’t believe (John 20:25), but also honest enough to follow the evidence to its full meaning. Thomas wasn’t given to half-unbelief or half-faith.
vii. Spurgeon considered many aspects of Thomas’ declaration.
· It was a devout expression of holy wonder.
· It was an expression of immeasurable delight.
· It indicates a complete change of mind.
· It was an enthusiastic profession of allegiance to Christ.
· It was a distinct and direct act of adoration, worship.
viii. “Whosoever will be saved, before all things it is necessary that he be able to unite with Thomas heartily in this creed, ‘My Lord and my God.’ I do not go in for all the minute distinctions of the Athanasian Creed, but I have no doubt that it was absolutely needful at the time it was written, and that it materially helped to check the evasions and tricks of the Arians. This short creed of Thomas I like much better, for it is brief, pithy, full, sententious, and it avoids those matters of detail which are the quicksands of faith.” (Spurgeon)
b. Thomas, because you have seen Me, you have believed: Commentators divide over whether or not Thomas actually did as Jesus invited him, to actually touch the wounds of Jesus. That Jesus said, because you have seen Me and not because you have seen and touched Me gives some evidence to the idea that Thomas did not actually touch the wounds of Jesus.
c. Blessed are those who have not seen and yet have believed: There is a special blessing promised to those who believe. Thomas demanded to see and touch before he would believe in the resurrected Jesus. Jesus understood that the testimony of reliable witnesses was evidence enough, and there was a blessing for those who accepted that sufficient evidence.
i. “I believe He is speaking, not of a subjective faith, but of a satisfied faith. He is speaking of faith that is satisfied with what God provides and is therefore not yearning for visions, miracles, esoteric experiences or various forms of success as evidence of God’s favor.” (Boice)
ii. “From this we learn that to believe in Jesus, on the testimony of his apostles, will put a man into the possession of the very same blessedness which they themselves enjoyed. And so has God constituted the whole economy of grace that a believer, at eighteen hundred years’ distance from the time of the resurrection, suffers no loss because he has not seen Christ in the flesh.” (Clarke)
iii. These words of Jesus are another beatitude, and promise a great blessing. Spurgeon considered some ways that this blessing would be diminished.
· When we demand a voice, a vision, a revelation to prove our faith.
· When we demand some special circumstances to prove our faith.
· When we demand some ecstatic experience.
· When we demand an answer to every difficult question or objection.
· When we demand what men think of as success in our work for Jesus.
· When we demand that others support us in our faith.
iv. The faith of Thomas becomes the climax of the book. Throughout the Gospel of John Jesus has triumphed over sickness, sin, evil men, death and sorrow. Now with Thomas, Jesus conquered unbelief.
6. (30-31) The summary statement of the Gospel of John.
And truly Jesus did many other signs in the presence of His disciples, which are not written in this book; but these are written that you may believe that Jesus is the Christ, the Son of God, and that believing you may have life in His name.
a. Jesus did many other signs: John admits that he presented an incomplete collection. He couldn’t possibly record in writing all that Jesus said and did (John 21:25).
i. One collects everything possible about a dead prophet; it is all one has of him. But one only tells enough of a living person to introduce one’s hearers to him. John trusts that a personal relationship with Jesus will reveal more to the believer.
ii. In this book: “That this was the original or intended conclusion of the gospel is shown by the use of the words ‘in this book,’ which indicate that the writer was now looking back on it as a whole.” (Dods)
b. These are written that you may believe that Jesus is the Christ, the Son of God: Though there were many other signs, John selected the signs presented in His Gospel to explain Jesus and bring readers to faith in Jesus as Messiah and God. This really isn’t a book about signs – it is a book about Jesus. The signs are helpful as they reveal Jesus.
i. The Gospel – and all of the Bible – was written so that we may believe, not that we might doubt. “There is no text in the whole Book which was intended to create doubt. Doubt is a seed self-sown, or sown by the devil, and it usually springs up with more than sufficient abundance without our care.” (Spurgeon)
ii. John 2:11 speaks of the beginning of signs, and throughout his Gospel John has listed at least seven signs.
· John 2:1-11 – Water into wine.
· John 4:46-54 – Healing of the nobleman’s son.
· John 5:1-15 – Healing at the pool of Bethesda.
· John 6:1-14 – Feeding the 5,000.
· John 6:15-21 – Jesus walks on water.
· John 9:1-12 – Healing of the man born blind.
· John 11:1-44 – Lazarus raised from the dead.
iii. The greatest signs of all were the death and resurrection of Jesus. Collectively, these signs give strong foundation for faith in Jesus as Messiah and God. That faith isn’t a blind leap; it is a reasonable step based on strong evidence.
iv. The Son of God: “The title does not, of course, imply biological descent like that of the Greco-Roman demigods; but the metaphor of sonship expresses the unity of nature, close fellowship, and unique intimacy between Jesus and the Father.” (Tenney)
c. And that believing you may have life in His name: John understood that faith in Jesus as Messiah and God had value beyond the honorable recognition of truth. It also carried the promise of life in His name. This was life that transformed John himself, and he wanted that same life and transformation for all through his Gospel account.
i. This belief isn’t complicated. Our response is as simple as ABC: Accept, Believe, and Commit. It isn’t always easy, but it isn’t complicated.
ii. Life in His name: “Through his name does not mean ‘through the naming of His name’, but through the power of the Person who bears the name. In the Bible the ‘name’ of God is not merely the name by which He is designated, but all that He is in Himself.” (Tasker)
A. Discovery of the empty tomb
Now on the first day of the week Mary Magdalene went to the tomb early, while it was still dark, and saw that the stone had been taken away from the tomb. Then she ran and came to Simon Peter, and to the other disciple, whom Jesus loved, and said to them, “They have taken away the Lord out of the tomb, and we do not know where they have laid Him.”
a. Now on the first day of the week Mary Magdalene went to the tomb early: Jesus was crucified on Friday (or on Thursday by some accounts). After His entombment, the tomb was sealed and guarded by Roman soldiers (Matthew 27:62-66). The tomb stayed sealed and guarded until discovered on the first day of the week… early, while it was still dark.
b. Mary Magdalene… she ran and came to Simon Peter: Other gospels explain she was not the only woman to come to the tomb that morning (at least three other women accompanied her). Mary was the one who ran back and told the disciples about the empty tomb, so John mentions her.
i. Jesus had cast seven demons out of this Mary (Luke 8:2, Mark 16:9). Her troubled past didn’t disqualify her from being the first witness of the resurrected Jesus and His first commissioned messenger of His resurrection.
ii. The women came to complete the work begun by Joseph and Nicodemus. “Probably, in view of the lateness of the hour and the nearness of the sabbath, Nicodemus was not able to use all the spices he had brought in the way intended.” (Morris)
c. They have taken away the Lord out of the tomb: When she saw the empty tomb, Mary’s first reaction was to think the body of Jesus was stolen. She wasn’t wishing for or anticipating the resurrection of Jesus, and she certainly did not imagine it out of hope.
i. We do not know where: “The plural may naturally be accepted as confirming Mark’s account that she was not alone.” (Dods)
2. (3-4) Peter and John run to the tomb.
The Historicity of the Empty Tomb of Jesus
Summary
An examination of both Pauline and gospel material leads to eight lines of evidence in support of the conclusion that Jesus's tomb was discovered empty: (1) Paul's testimony implies the historicity of the empty tomb, (2) the presence of the empty tomb pericope in the pre-Markan passion story supports its historicity, (3) the use of 'on the first day of the week' instead of 'on the third day' points to the primitiveness of the tradition, (4) the narrative is theologically unadorned and non-apologetic, (5) the discovery of the tomb by women is highly probable, (6) the investigation of the empty tomb by the disciples is historically probable, (7) it would have been impossible for the disciples to proclaim the resurrection in Jerusalem had the tomb not been empty, (8) the Jewish polemic presupposes the empty tomb.
Until recently the empty tomb has been widely regarded as both an offense to modern intelligence and an embarrassment for Christian faith; an offense because it implies a nature miracle akin to the resuscitation of a corpse and an embarrassment because it is nevertheless almost inextricably bound up with Jesus' resurrection, which lies at the very heart of the Christian faith. But in the last several years, a remarkable change seems to have taken place, and the scepticism that so characterized earlier treatments of this problem appears to be fast receding. [1] Though some theologians still insist with Bultmann that the resurrection is not a historical event, [2] this incident is certainly presented in the gospels as a historical event, one of the manifestations of which was that the tomb of Jesus was reputedly found empty on the first day of the week by several of his women followers; this fact, at least, is therefore in principle historically verifiable. But how credible is the evidence for the historicity of Jesus' empty tomb?
In order to answer this question, we need to look first at one of the oldest traditions contained in the New Testament concerning the resurrection. In Paul's first letter to the Corinthians (AD 56-57) he cites what is apparently an old Christian formula (1 Cor 15. 3b-5), as is evident from the non-Pauline and Semitic characteristics it contains. [3] The fact that the formula recounts, according to Paul, the content of the earliest apostolic preaching (I Cor 15. 11), a fact confirmed by its concordance with the sermons reproduced by Luke in Acts, [4] strongly suggests that the formula originated in the Jerusalem church. We know from Paul's own hand that three years after his conversion (AD 33-35) at Damascus, he visited Jerusalem, where he met personally Peter and James (Gal 1. 18-19). He probably received the formula in Damascus, perhaps in Christian catechesis; it is doubtful that he received it later than his Jerusalem visit, for it is improbable that he should have replaced with a formula personal information from the lips of Peter and James themselves. [5] The formula is therefore probably quite old, reaching back to within the first five years after Jesus' crucifixion. It reads:
'. . . that Christ died for our sins in accordance with the scriptures, and that he was buried, and that he was raised on the third day in accordance with the scriptures, and that he appeared to Cephas, then to the Twelve.' (I Cor 15. 3b-5)
Does this formula bear witness to the fact of Jesus' empty tomb? Several questions here need to be kept carefully distinct. First we must decide: (1) does Paul accept the empty tomb, and (2) does Paul mention the empty tomb? It is clear that (1) does not imply (2), but (2) would imply (1). Or in other words, just because Paul may not mention the empty tomb, that does not mean he does not accept the empty tomb. Too many New Testament scholars have fallen prey to Bultmann's fallacy: 'Legenden sind die Geschichten vom leeren Grab, von dem Paulus noch nicht weiss' ('the stories of the empty tomb, of which Paul as yet knows nothing, are legends'). [6] Paul's citation of Jesus' words at the Last Supper (I Cor 11. 23-26) shows that he knew the context of the traditions he delivered; but had the Corinthians not been abusing the eucharist this knowledge would have remained lost to us. So one must not too rashly conclude from silence that Paul 'knows nothing' of the empty tomb. Next, if Paul does imply the empty tomb, then we must ask: (1) does Paul believe Jesus' tomb was empty, and (2) does Paul know Jesus' tomb was empty? Again, as Grass is quick to point out, (1) does not imply (2); [7] but (2) would imply (1). In other words, does Paul simply assume the empty tomb as a matter of course or does he have actual historical knowledge that the tomb of Jesus was empty? Thus, even if it could be proved that Paul believed in a physical resurrection of the body, that does not necessarily imply that he knew the empty tomb for a fact.
Some exegetes have maintained that the statement of the formula 'he was buried' implies, standing as it does between the death and the resurrection, that the tomb was empty. [8] But many critics deny this, holding that the burial does not stand in relation to the resurrection, but to the death, and as such serves to underline and confirm the reality of the death. [9] The close connection (Zusammenhang) of the death and burial is said to be evident in Rom 6, where to be baptized into Christ's death is to be baptized into his burial. Grass maintains that for the burial to imply a physical resurrection the sentence would have to read apethanen ... kai hoti egegertai ek tou taphou. As it is, the burial does not therefore imply that the grave was empty. Grass also points out that Paul fails to mention the empty tomb in the second half of I Cor 15, an instructive omission since the empty tomb would have been a knock-down argument against those who denied the bodily resurrection. [10] It is also often urged that the empty tomb was no part of the early kerygma and is therefore not implied in the burial.
Now while I should not want to assert that the 'he was buried' was included in the formula in order to prove the empty tomb, it seems to me that the empty tomb is implied in the sequence of events related in the formula. For in saying that Jesus died -- was buried -- was raised -- appeared, one automatically implies that the empty grave has been left behind. The four-fold hoti and the chronological series of events weigh against subordinating the burial to the death. [11] In baptism the burial looks forward with confidence to the rising again (Rom 6. 4; Col. 2. 13). [12] And even if one denied the evidence of the four-fold hoti and the chronological sequence, the very fact that a dead-and-buried man was raised itself implies an empty grave. Grass's assertion that the formula should read egegertai ek tou taphou is not so obvious when we reflect on the fact that in I Cor 15. 12 Paul does write ek nekron egegertai (cf. I Thess 1. 10; Rom 10. 9; Gal 1. 1; Mt. 27. 64; 28. 7). [13] In being raised from the dead, Christ is raised from the grave. In fact the very verbs egegertai and anistanai imply that the grave is left empty. [14] The notion of resurrection is unintelligible with regard to the spirit or soul alone. The very words imply resurrection of the body. It is the dead man in the tomb who awakens and is physically raised up to live anew. Thus the grave must be empty. [15] And really, even today were we to be told that a man who died and was buried rose from the dead and appeared to his friends, only a theologian would think to ask, 'But was his body still in the grave?' How much more is this true of first century Jews, who shared a much more physical conception of resurrection than we do! [16]
Grass's argument that had Paul believed in the empty tomb, then he would have mentioned it in the second half of I Cor 15 turns back upon Grass; for if Paul did not believe in the empty tomb, as Grass contends, then why did he not mention the purely spiritual appearance of Christ to him alluded to in I Cor 15. 8 as a knock-down argument for the immateriality of Christ's resurrection body? Grass can only reply that Paul did not appeal to his vision of Jesus to prove that the resurrection body would be heavenly and glorious because the meeting 'eluded all description'. [17] Not at all; Paul could have said he saw a heavenly light and heard a voice (Acts 22. 6-7; 26. 13-14). In fact the very ineffability of the experience would be a positive argument for immateriality, since a physical body is not beyond all description. Grass misunderstands Paul's intention in discussing the resurrection body in I Cor 15. 35-56. Paul does not want to prove that it is physical, for that was presupposed by everyone and was perhaps what the Corinthians protested at. He wants to prove that the body is in some sense spiritual, and thus the Corinthians ought not to dissent. Hence, the mention of the empty tomb is wholly beside the point. There is thus no reason to mention the empty tomb, but good reason to appeal to Paul's vision, which he does not do. Could it be that in the appearance to him Paul did not see a determinative answer to the nature of the resurrection body? Finally as to the absence of the empty tomb in the kerygma, the statement 'he was buried' followed by the proclamation of the resurrection indicates that the empty tomb was implied in the kerygma. The formula is a summary statement, [18] and it could very well be that Paul was familiar with the historical context of the simple statement in the formula, which would imply that he not only accepted the empty tomb, but knew of it as well. The tomb is certainly alluded to in the preaching in Acts 2. 24-32. [19]
The empty tomb is also implicit in Paul's speech in Antioch of Pisidia, which follows point for point the outline of the formula in 1 Cor. 15. 3-5: '. . . they took him down from the tree, and laid him in a tomb. But God raised him from the dead; and for many days he appeared to those who came up with him from Galilee to Jerusalem.' (Acts 13. 29-31). No first century Jew or pagan would be so cerebral as to wonder if the tomb was empty or not. That the empty tomb is not more explicitly mentioned may be simply because it was regarded as selbstverständlich (self-evident), given the resurrection and appearances of Jesus. Or again, it may be that the evidence of the appearances so overwhelmed the testimony of legally unqualified women to the empty grave that the latter was not used as evidence. But the gospel of Mark shows that the empty tomb was important to the early church, even if it was not appealed to as evidence in evangelistic preaching. So I think it quite apparent that the formula and Paul at least accept the empty tomb, even if it is not explicitly mentioned. [20]
A second possible reference to the empty tomb is the phrase 'on the third day.' Since no one actually saw the resurrection of Jesus, how could it be dated on the third day? Some critics argue that it was on this day that the women found the tomb empty, so the resurrection came to be dated on that day. [21] Thus, the phrase 'on the third day' not only presupposes that a resurrection leaves an empty grave behind, but is a definite reference to the historical fact of Jesus' empty tomb. But of course there are many other ways to interpret this phrase: (1) The third day dates the first appearance of Jesus. (2) Because Christians assembled for worship on the first day of the week, the resurrection was assigned to this day. (3) Parallels in the history of religions influenced the dating of the resurrection on the third day. (4) The dating of the third day is lifted from Old Testament scriptures. (5) The third day is a theological interpretation indicating God's salvation, deliverance, and manifestation. Each of these needs to be examined in turn.
1. The third day dates the first appearance of Jesus. [22] In favor of this view is the proximity of the statement 'raised on the third day in accordance with the scriptures' with 'he appeared to Cephas, then to the Twelve'. Because Jesus appeared on the third day, the resurrection itself was naturally dated on that day. The phrase 'according to the scriptures' could indicate that the Christians, having believed Christ rose on the third day, sought out appropriate proof texts. This understanding has a certain plausibility, for whether the disciples remained in Jerusalem or fled to Galilee, they could have seen Jesus on the third day after his death. If it can be proved, however, that the disciples returned slowly to Galilee and saw Christ only some time later, then this view would have to be rejected. A discussion of this question must be deferred until later. Against this understanding of the third day it is sometimes urged that the Easter reports do not use the expression 'on the third day' but prefer to speak of 'the first day of the week' (Mk 16. 2; Mt. 28. 1; Lk 24. 1; Jn 20. 1, 19). [23] All the 'third day' references are in the Easter kerygma, not the Easter reports. This is said to show not only the independence of the Easter reports from the kerygma, but also that neither the empty tomb nor the appearances of Christ can be the direct cause of the 'third day' motif. [24]
But why could they not be the root cause? All that has been proved by the above is that the Easter reports and the Easter preaching are literarily distinct, but that cannot prove that they are not twin offshoots of an original event. The event could produce the report on the one hand; on the other hand it would set the believers a-searching in the Old Testament for fulfilled scriptures. In this search they could find and adopt the language of the third day because, according to Jewish reckoning, the first day of the week was in fact the third day after Jesus' death. [25] Scriptures in hand, they could thus proclaim 'he was raised on the third day in accordance with the Scriptures'. This language could then be used by the evangelists outside the Easter reports or actually interwoven with them, as by Luke. Thus the same root event could produce two different descriptions of the day of the resurrection. But was that event the first appearance of Jesus? Here one cannot exclude the empty tomb from playing a role, for the time reference 'the first day of the week' (= 'on the third day') refers primarily to it. If the appearances first occurred on the same day as the discovery of the empty tomb, then these two events together would naturally date the resurrection, and the 'third day' language could reflect the LXX formulation, which is found in I Cor 15. 4 and was worked into the traditions underlying the gospels. So I think it unlikely that the date 'on the third day' refers to the day of the first appearance alone.
2. Because Christians assembled on the first day of the week, the resurrection was assigned to this day. [26] Although this hypothesis once enjoyed adherents, it is now completely abandoned. Rordorf's study Der Sonntag has demonstrated to the satisfaction of New Testament critics that the expression 'raised on the third day' has nothing to do with Christian Sunday worship. [27] More likely would be that because the resurrection was on the third day, Christians worshipped on that day. But even though the question of how Sunday came to be the Christian special day of worship is still debated, no theory is today propounded which would date the resurrection as a result of Sunday as a worship day.
3. Parallels in the history of religions influenced the dating of the resurrection on the third day. [28] In the hey-day of the history of religions school, all sorts of parallels in the history of other religions were adduced in order to explain the resurrection on the third day; but today critics are more sceptical concerning such alleged parallels. The myths of dying and rising gods in pagan religions are merely symbols for processes of nature and have no connection with a real historical individual like Jesus of Nazareth. [29] The three-day motif is found only in the Osiris and perhaps Adonis cults, and, in Grass's words, it is 'completely unthinkable' that the early Christian community from which the formula stems could be influenced by such myths. [30] In fact there is hardly any trace of cults of dying and rising gods at all in first century Palestine. It has also been suggested that the three day motif reflects the Jewish belief that the soul did not depart decisively from the body until after three days. [31] But the belief was actually that the soul departed irrevocably on the fourth day, not the third; in which case the analogy with the resurrection is weaker. But the decisive count against this view is that the resurrection would not then be God's act of power and deliverance from death, for the soul had not yet decisively left the body, but merely re-entered and resuscitated it. This would thus discredit the resurrection of Jesus. If this Jewish notion were in mind, the expression would have been 'raised on the fourth day' after the soul had forever abandoned the body and all hope was gone (cf. the raising of Lazarus). Some critics have thought that the third day reference is meant only to indicate, in Hebrew reckoning, 'a short time' or 'a while'. [32] But when one considers the emphasis laid on this motif not only in the formula but especially in the gospels, then so indefinite a reference would not have the obvious significance which the early Christians assigned to this phrase.
4. The dating of the third day is lifted from Old Testament scriptures. [33] Because the formula reads 'on the third day in accordance with the scriptures' many authors believe that the third day motif is drawn from the Old Testament, especially Hos 6. 2, which in the LXX reads te hemera te trite. [34] Although Metzger has asserted, with appeal to I Maccabees 7. 16-17, that the 'according to the scriptures' may refer to the resurrection, not the third day, [35] this view is difficult to maintain in light, not only of the parallel in I Cor 15. 3, but especially of Lk 24. 45 where the third day seems definitely in mind. Against taking the 'on the third day' to refer to Hos 6. 2 it has been urged that no explicit quotation of the text is found in the New Testament, or indeed anywhere until Tertullian (Adversus Judaeos 13). [36] New Testament quotations of the Old Testament usually mention the prophet's name and are of the nature of promise-fulfillment. But nowhere do we find this for Hos 6. 2. Grass retorts that there is indirect evidence for Christian use of Hos 6. 2 in the Targum Hosea's dropping the reference to the number of days; the passage had to be altered because Christians had preempted the verse. Moreover, Jesus' own 'predictions', written back into the gospel story by believers after the event, obviated the need to cite a scripture reference. [37] But Grass's first point is not only speculative, but actually contradicted by the fact that later Rabbis saw no difficulty in retaining the third day reference in Hosea. [38] No conclusion can be drawn from Targum Hosea's change in wording, for the distinctive characteristic of this Targum is its free haggadic handling of the text. And this still says nothing about New Testament practice of citing the prophet's name. As for the second point, Matthew's citation of Jonah (Mt. 12. 40) makes this rather dubious. According to Bode, Matthew's citation is the decisive argument against Hos 6. 2, since it shows the latter was not the passage which Christians had in mind with regard to the three day motif. [39] But to my mind the greatest difficulty with the Hos 6. 2 understanding of 'on the third day' is that it necessitates that the disciples without the instigation of any historically corresponding event would find and adopt such a scripture reference. For this understanding requires that no appearances occurred and no discovery of the empty tomb was made on the third day/first day of the week. Otherwise these events would be the basis for the date of the resurrection, not Hos 6. 2 alone. But if there were no such events, then it is very unlikely that the disciples should land upon Hos 6. 2 and apply it to Jesus's resurrection. It is much more likely that such events should prompt them to search the scriptures for appropriate texts, which could then be interpreted in light of the resurrection (Jn 2. 22; 12. 16; 20. 8-9). [40] And insofar as the empty tomb tradition or appearance traditions prove accurate the understanding in question is undermined. For if the empty tomb was discovered on the first day of the week or Peter saw Jesus on the third day, then the view that 'the third day' was derived solely from scripture is untenable. At most one could say that the language of the LXX was applied to these events. The falsity of the gospel traditions concerning both the discovery of the empty tomb and the day of the first appearance is thus a sine qua non for the Hos 6. 2 understanding, and hence should either of these traditions prove accurate, the appeal to Hos 6. 2 as the basis (as opposed to the language) for the date of the resurrection must be rejected.
5. The third day is a theological interpretation indicating God's salvation, deliverance, and manifestation. [41] This understanding is, I think, the only serious alternative to regarding the third day motif as based on the historical events of the resurrection, and it has been eloquently expounded by Lehmann and supported by Bode and McArthur as well. To begin with, there are nearly 30 passages in the LXX that use the phrase te hemera te trite to describe events that happened on the third day. [42] On the third day Abraham offered Isaac (Gen. 22. 4; cf. Gen. 34. 25; 40. 20). On the third day Joseph released his brothers from prison (Gen. 42. 18). After three days God made a covenant with his people and gave the law (Ex 19. 11, 16; cf. Lev 8. 18; Num. 7. 24; 19. 12, 19; Judg 19. 8; 20. 30). On the third day David came to Ziklag to fight the Amalekites (I Sam 30. 1) and on the third day thereafter heard the news of Saul and Jonathan's death (2 Sam 1. 2). On the third day the kingdom was divided (I Kings 12. 24; cf. 2 Chron 10. 12). On the third day King Hezekiah went to the House of the Lord, after which he was miraculously healed (2 Kings 20. 5, 8). On the third day Esther began her plan to save her people (Esther 5. 1; cf. 2 Macc 11. 18). The only passage in the prophets mentioning the third day is Hos 6. 2. Thus, the third day is a theologically determined time at which God acts to bring about the new and the better, a time of life, salvation, and victory. On the third day comes resolution of a difficulty through God's act.
A second step is to consider the interpretation given to such passages in Jewish Midrash (Midrash Rabbah, Genesis [Mikketz] 91. 7; Midrash Rabbah, Esther 9. 2; Midrash Rabbah, Deuteronomy [Ki Thabo] 7. 6; Midrash on Psalms 22. 5). [43] From Jewish Midrash it is evident that the third day was the day when God delivered the righteous from distress or when events reached their climax. It is also evident that Hos 6. 2 was interpreted in terms of resurrection, albeit at the end of history. The mention of the offering of Isaac on the third day is thought to have had a special influence on Christian thought, as we shall see.
A third step in the argument is comparison of other Rabbinical literature concerning the third day with regard to the resurrection (Targum Hosea 6. 2; B. Sanhedrin 97a; B. Rosh Hashanah 31a; P. Berakoth 5. 2; P. Sanhedrin 11. 6; Pirkê de Rabbi Eliezer 51. 73b-74a; Tanna de-be Eliyyahu, p. 29). [44] These passages make it evident that the rabbis were interpreting Hos 6. 2 in the sense of an eschatological resurrection.
Now according to Lehmann, when one brings together the testimonies of the Midrash Rabbah, the rabbinic writings, and the passages from the LXX, then it becomes highly probable that I Cor 15. 4 can be illuminated by these texts and their theology. Of particular importance here is the sacrifice of Isaac, which grew to have a great meaning for Jewish theology. [45] In pre-Christian Judaism the sacrifice of Isaac was already brought into connection with the Passover. He became a symbol of submission and self-sacrifice to God. The offering of Isaac was conceived to have salvific worth. In the blood of the sacrifices, God saw and remembered the sacrifice of Isaac and so continued His blessing of Israel. This exegesis of Gen. 22 leaves traces in Rom 4. 17, 25; 8. 32 and Heb 11. 17-19. This last text particularly relates the resurrection of Jesus to the sacrifice of Isaac. When we consider the formula in I Cor 15, with its Semitic background, then it is much more probable that the expression 'on the third day' reflects the influence of Jewish traditions that later came to be written in the Talmud and Midrash than that it refers to Hos 6. 2 alone as a proof text. Thus, 'on the third day' does not mark the discovery of the empty tomb or the first appearance, nor is it indeed any time indicator at all, but rather it is the day of God's deliverance and victory. It tells us that God did not leave the Righteous One in distress, but raised him up and so ushered in a new eon.
Lehmann's case is well-documented and very persuasive; but doubt begins to arise when we consider the dates of the citations from Talmud and Midrash. [46] For all of them are hundreds of years later than the New Testament period. Midrash Rabbah, which forms the backbone of Lehmann's case, is a collection from the fourth to the sixth centuries. Pirkê de Rabbi Eliezer is a collection from the outgoing eighth century. The Midrash on Psalm 22 contains the opinions of the Amoraim, rabbinical teachers of the third to the fifth centuries. The Babylonian Talmud and the so-called Jerusalem Talmud are the fruit of the discussions and elaborations of these Amoraim on the Mishnah, which was redacted, arranged, and revised by Rabbi Judah ha-Nasi about the beginning of the third century. The Mishnah itself, despite its length, never once quotes Hos 6. 2; Gen. 22. 4; 42. 17; Jonah 2. 1; or any other of the passages in question which mention the third day. The Targum on Hosea, says McArthur, is associated with Jonathan b. Uzziel of the first century; but this ascription is quite uncertain and in any case tells us nothing concerning Hos 6. 2 in particular, since the Targum as a whole involves a confluence of early and late material. Thus all the citations concerning the significance of the third day and interpreting Hos 6. 2 in terms of an eschatological resurrection may well stem from literature centuries removed from the New Testament period.
Lehmann believes that these citations embody traditions that go back orally prior to the Christian era. But if that is the case then should not we expect to confront these motifs in Jewish literature contemporaneous with the New Testament times, namely, the Apocrypha and Pseudepigrapha? One would especially expect to confront the third day motif in the apocalyptic works. In fact, it is conspicuously absent. The book of I Enoch, which is quoted in Jude, had more influence on the New Testament writers than any other apocryphal or pseudepigraphic work and is a valuable source of information concerning Judaism from 200 BC to AD 100. In this work the eschatological resurrection is associated with the number seven, not three (91. 15-16; 93). Similarly in 4 Ezra, a first century compilation, the eschatological resurrection takes place after seven days (7. 26-44). A related work from the second half of the first century and a good representative of Jewish thought contemporaneous with the New Testament, 2 Baruch gives no indication of the day of the resurrection at history's end (50-51). Neither does 2 Macc 7. 9-42; 12. 43-45 or the Testament of the Twelve Patriarchs (Judah) 25. 1, 4; (Zebulun) 10. 2; (Benjamin) 10. 6-18. All these works, which stem from intertestamental or New Testament times, have a doctrine of eschatological resurrection, but not one of them knows of the third day motif. Evidently the number seven was thought to have greater divine import than the number three (cf. Rev 1. 20; 6. 1; 8. 2; 15. 1, 7). In 2 Macc 5. 14; 11. 18 we find 'three days' and 'third day' mentioned in another context, but their meaning is wholly non-theological, indicating only 'a short time' or 'the day after tomorrow'. Lehmann's case would be on firmer ground if he were able to find passages in Jewish literature contemporary with the New Testament which employ the third day motif or associate the resurrection with the third day.
It appears that this interpretation is a peculiarity of later rabbinical exegesis of the Talmudic period.
Moreover, there is no indication that the New Testament writers were aware of such exegesis. Lehmann states that the conception of the offering of Isaac as a salvific event is characteristic of the New Testament. But this is not the question; the issue is whether the interpretation of the offering of Isaac on the third day plays a role in the New Testament. Here the evidence is precisely to the contrary: Rom 4. 17, 25 not only have nothing to do with the offering of Isaac (it is to Gen. 15, not 22 that Paul turns for his doctrine of justification by faith), but refer to Jesus's resurrection without mentioning the third day; Rom 8. 32 makes no explicit mention of Isaac and no mention, implicit or explicit, of the resurrection, not to speak of the third day; Heb 11. 17-19 does not in fact explicitly use Isaac as a type of Christ, but more importantly does not in any way mention the third day. This latter passage seems to be crucial, for in this passage, of all places, one would expect the mention of the third day theme in connection with the resurrection. But it does not appear. This suggests that the connection of the sacrifice of Isaac with a third day motif was not yet known. In the other passage in which the offering of Isaac is employed (Jas 2. 21-23), there is also no mention of the third day motif. (And James even goes on to use the illustration of Rahab the harlot and the spies, again without mentioning the three day theme, as did later Rabbinic exegesis.) Hence, the appeal to the offering of Isaac as evidence that the New Testament knows of the rabbinic exegesis concerning the theological significance of the third day is counter-productive.
Finally, Lehmann's interpretation labors under the same difficulty as did the appeal to Hos 6. 2 alone; namely, in order for this interpretation to be true, the traditions of the discovery of the empty tomb and of the time of the first appearances must be false. For if these events did occur on the third day/first day of the week, this would undoubtedly have affected the early believers' dating of those events. But then the dating cannot be wholly ascribed to theological motifs. If we say that the traditions are false, the question then becomes whether the disciples would have adopted the language of the third day. For suppose the first appearance of Christ was to Peter, say, a week later as he was fishing in Galilee. Would the believers then say that Jesus was raised on the third day rather than the seventh? Lehmann says yes; for the 'third day' is not meant in any sense as a time indicator, but is a purely theological concept. But were the disciples so speculative? Certainly Luke understands the third day as a time indicator, for he writes 'But on the first day of the week ... That very day ... it is now the third day ... the Christ should suffer and on the third day rise from the dead' (Lk 24. 1, 13, 21, 46). Lehmann and Bode's response is that Luke as a Gentile did not understand the theological significance of the third day, which would have been clear to his Jewish contemporaries, and so mistook it as a time indicator. [47] This cannot but make one feel rather uneasy about Lehmann's hypothesis, for it involves isolating Luke from all his Jewish contemporaries. And I suspect that this dichotomy between historical understanding and theological significance is an import from the twentieth century. 
The Rabbis cited in the Talmud and Midrash no doubt believed both that the events in question really happened on the third day and that they were theologically significant, for they include in their lists of events that occurred on the third day not only events in which the third day was important theologically (as in the giving of the law) but also events in which the third day was not charged with theological significance (as in Rahab and the spies). There is no reason to think that the New Testament writers did not think Jesus actually rose on the third day; John, for example, certainly seems to take the three day figure as a time indicator by contrasting it with the 46 years it took to build the temple (Jn 2. 20). But in this case, it is doubtful that they would have adopted the language of the third day unless the Easter events really did take place on the third day. This suggests that while the LXX may have provided the language for the dating of the resurrection, the historical events of Easter provided the basis for dating the resurrection. The events of Easter happened on 'the first day of the week', but the language of 'the third day' was adopted because (1) the first day of the week was in fact the third day subsequent to the crucifixion, and (2) the third day in the LXX was a day of climax and of God's deliverance.
I think this is the most likely account of the matter. This means that the phrase 'on the third day' in the formula of I Cor 15 is a time indicator for the events of Easter, including the empty tomb, employing the language of the Old Testament concerning God's acts of deliverance and victory on the third day, perhaps with texts like Jonah 2. 11 and Hos 6. 2 especially in mind. The phrase is, in Liechtenstein's words, a fusion of historical facts plus theological tradition. [48]
There can be little doubt, therefore, that Paul accepted the idea of an empty tomb as a matter of course. But did he know the empty tomb of Jesus? Here we must go outside the confines of I Cor 15 and take a larger view of the historical context in which Paul moved. We know from Paul's own letters that Paul was in Jerusalem three years after his conversion, and that he stayed with Peter two weeks and also spoke with James (Gal 1. 18-19). We know that fourteen years later he was again in Jerusalem and that he ministered with Barnabas in Antioch (Gal 2. 1, 11). We know that he again was later traveling to Jerusalem with financial relief for the brethren there (Rom 15. 25; 1 Cor 16. 3; 2 Cor 8-9). Furthermore, his letters testify to his correspondence with his various churches, and his personal references make it clear that he had a team of fellow workers like Titus, Timothy, Silas, Aristarchus, Justus, and others who kept him well-informed on the situation in the churches; he also received personal reports from other believers, such as Chloe's people (I Cor 1. 11). Paul knew well not only the aberrations of the churches (Gal; I Cor 15. 29), but also the context of the traditions he delivered (I Cor 11. 23-26). Therefore, if the gospel accounts of the empty tomb embody old traditions concerning its discovery, it is unthinkable that Paul would not know of it. If Mark's narrative contains an old tradition coming out of the Jerusalem community, then Paul would have had to be a recluse not to know of it. This point seems so elementary, but it is somehow usually overlooked by even those who hold that Mark embodies old traditions. If the tradition of the empty tomb is old then somebody would have told Paul about it. But even apart from the Markan tradition, Paul must have known the empty tomb. Paul certainly believed that the grave was empty. Therefore Peter, with whom Paul spoke during those two weeks in Jerusalem, must also have believed the tomb was empty.
A Jew could not think otherwise. Therefore, the Christian community also, of which Peter was the leader, must have believed in the empty tomb. But that can only mean that the tomb was empty. For not only would the disciples not believe in a resurrection if the corpse were still in the grave, but they could never have proclaimed the resurrection either under such circumstances. But if the tomb was empty, then it is unthinkable that Paul, being in the city for two weeks six years later and after that often in contact with the Christian community there, should never hear a thing about the empty tomb. Indeed, is it too much to imagine that during his two week stay Paul would want to visit the place where the Lord lay? Ordinary human feelings would suggest such a thing. [49] So I think that it is highly probable that Paul not only accepted the empty tomb, but that he also knew that the actual grave of Jesus was empty.
With this conclusion in hand, we may now proceed to the gospel accounts of the discovery of the empty tomb to see if they supply us with any additional reliable information. Found in all four gospels, the empty tomb narrative shows sure evidence of traditional material in the agreement between the Synoptics and John. It is certain that traditions included that on the first day of the week women, at least Mary Magdalene, came to the tomb early and found the stone taken away; that they saw an angelic appearance; that they informed the disciples, at least Peter, who went, found the tomb empty with the grave clothes lying still in the grave, and returned home puzzled; that the women saw a physical appearance of Jesus shortly thereafter; and that Jesus gave them certain instructions for the disciples. Not all the Synoptics record all these traditions; but John does, and at least one Synoptic confirms each incident; thus, given John's independence from the Synoptics, these incidents are traditional. That is not to say they are historical.
The story of the discovery of the empty tomb was in all likelihood the conclusion or at least part of the pre-Markan passion story. [50] About the only argument against this is the juxtaposition of the lists in Mk 15. 47 and 16. 1, which really affords no grounds for such a conclusion at all. [51] At the very most, this could only force one to explain one or the other as an editorial addition; it would not serve to break off the empty tomb story from the passion narrative. [52] The most telling argument in favor of 16. 1-8's belonging to the passion story is that it is unthinkable that the passion story could end in defeat and death with no mention of the empty tomb or resurrection. As Wilckens has urged, the passion story is incomplete without victory at the end. [53] Confirmation of the inclusion of 16. 1-8 in the pre-Markan passion story is the remarkable correspondence to the course of events described in I Cor 15: died -- was buried -- rose -- appeared; all these elements appear in the pre-Markan passion story, including Christ's appearance (v. 7). Thus, there are strong reasons for taking the empty tomb account as part of the pre-Markan passion story.
Like the burial story, the account of the discovery of the empty tomb is remarkably restrained. Bultmann states, '. . . Mark's presentation is extremely reserved, in so far as the resurrection and the appearance of the risen Lord are not recounted.' [54] Nauck observes that many theological motifs that might be expected are lacking in the story: (1) the proof from prophecy, (2) the in-breaking of the new eon, (3) the ascension of Jesus' Spirit or his descent into hell, (4) the nature of the risen body, and (5) the use of Christological titles. [55] Although kerygmatic speech appears in the mouth of the angel, the fact of the discovery of the empty tomb is not kerygmatically colored. All these factors point to a very old tradition concerning the discovery of the empty tomb.
Mark begins the story by relating that when the Sabbath was past (Saturday night), the women bought spices to anoint the body. The next morning they went to the tomb. The women's intention to anoint the body has caused no end of controversy. It is often assumed that the women were coming to finish the rushed job done by Joseph on Friday evening; John, who has a thorough burial, mentions no intention of anointing. It is often said that the 'Eastern climate' would make it impossible to anoint a corpse after three days. And it would not have violated Sabbath law to anoint a body on the Sabbath, instead of waiting until Sunday (Mishnah Shabbat 23. 5). Besides, the body had been already anointed in advance (Mk 14. 8). And why do the women think of the stone over the entrance only after they are underway? They should have realized the venture was futile.
But what in fact were the women about? There is no indication that they were going to complete a task poorly done. Mark gives no hint of hurry or incompleteness in the burial. That Luke says the women saw 'how' the body was laid (Lk 23. 55) does not imply that the women saw a lack which they wished to remedy; it could mean merely they saw that it was laid in a tomb, not buried, thus making possible a visit to anoint the body. The fact that John does not mention the intention of anointing proves little, since Matthew does not mention it either. So there seems to be no indication that the women were going to complete Jesus' burial. In fact what the women were probably doing is precisely that described in the Mishnah, namely the use of aromatic oils and perfumes that could be rubbed on or simply poured over the body. [56] Even if the corpse had begun to decay, that would not prevent this simple act of devotion by these women. This same devotion could have induced them to go together to open the tomb, despite the stone. (That Mark only mentions the stone here does not mean they had not thought of it before; it serves a literary purpose here to prepare for v. 4). The opening of tombs to allow late visitors to view the body or to check against apparent death was Jewish practice, [57] so the women's intention was not extraordinary. It is true that anointing could be done on the Sabbath, but this was only for a person lying on the death bed in his home, not for a body already wrapped and entombed in a sealed grave outside the city. Blinzler points out that, odd as it may seem, it would have been against the Jewish law even to carry the aromata to the grave site, for this was 'work' (Jer 17. 21-22; Shabbath 8. 1)! [58] Thus, Luke's comment that the women rested on the Sabbath would probably be a correct description. Sometimes it is asserted that Matthew leaves out the anointing motif because he realized one could not anoint a corpse after three days in that climate.
But Mark himself, who lived in the Mediterranean climate, [59] would surely also realize this fact, if indeed it be true. Actually, Jerusalem, being 700 metres above sea level, can be quite cool in April; interesting is the entirely incidental detail mentioned by John that at night in Jerusalem at that time it was cold, so much so that the servants and officers of the Jews had made a fire and were standing around it warming themselves (Jn 18. 18). Add to this the facts that the body, interred Friday evening, had been in the tomb only a night, a day, and a night when the women came to anoint it early Sunday morning, that a rock-hewn tomb in a cliff side would stay naturally cool, and that the body may have already been packed around with aromatic spices, and one can see that the intention to anoint the body cannot in any way be ruled out. [60] The argument that it had been anointed in advance is actually a point in favor of the historicity of this intention, for after 14. 8 Mark would never invent such a superfluous and almost contradictory intention for the women.
The gospels all agree that around dawn the women visited the tomb. Which women? Mark says the two Maries and Salome; Matthew mentions only the two Maries; Luke says the two Maries, Joanna, and other women; John mentions only Mary Magdalene. There seems to be no difficulty in imagining a handful of women going to the tomb. Even John records Mary's words as 'we do not know where they have laid him' (Jn 20. 2). It is true that Semitic usage could permit the first person plural to mean simply 'I' (cf. Jn 3. 11, 32), but not only does this seem rather artificial in this context, but then we would expect the plural as well in v. 13. [61] In any case, this ignores the Synoptic tradition and makes only an isolated grammatical point. When we have independent traditions that women visited the tomb, then the weight of probability falls decisively in favor of Mary's 'we' being the remnant of a tradition of more than one woman. John has perhaps focused on her for dramatic effect.
Arriving at the tomb the women find the stone rolled away. According to the Synoptics the women actually enter the tomb and see an angelic vision. John, however, says Mary Magdalene runs to find Peter and the Beloved Disciple, and only after they come and go from the tomb does she see the angels. Mark's young man is clearly intended to be an angel, as is evident from his white robe and the women's reaction. [62] Although some critics want to regard the angel as a Markan redaction, the exclusion of the angelophany from the pre-Markan passion story is arbitrary, since the earliest Christians certainly believed in the reality of angels and demons and would not hesitate to relate such an account as embodied in vs. 5-8. [63] And John confirms that there was a tradition of the women's seeing angels at the tomb, especially in light of the fact that he keeps the angels in his account even though their role is oddly superfluous. [64]
Many scholars wish to see v. 7 as a Markan interpolation into the pre-Markan tradition. [65] But the evidence for this seems remarkably weak, in my opinion. [66] The fundamental reason for taking 16. 7 as an insertion is the belief that 14. 28 is an insertion, to which 16. 7 refers. But what is the evidence that 14. 28 is an interpolation? The basic argument is that vs. 27 and 29 read smoothly without it. [67] This, however, is the weakest of reasons for suspecting an insertion (especially since the verses read just as smoothly when v. 28 is left in!), for the fact that a sentence can be dropped out of a context without destroying its flow may be entirely coincidental and no indication that the sentence was not originally part of that context. In fact there are positive reasons for believing 14. 28 is not an insertion. [68] It is futile to object that in 14. 29 Peter only takes offense at v. 27, not v. 28, for of course he objects only to Jesus' telling him they will all fall away, and not to Jesus' promise to go before them (cf. the same pattern in 8. 31-32). On this logic one would have to leave out not only the prediction of the resurrection, but also the striking of the shepherd, since Peter jumps over that as well. There thus seem to be no good reasons to regard 14. 28 as a redactional insertion and positive reasons to see it as firmly welded in place. [69] This means that 16. 7 is also in place in the pre-Markan tradition of the passion story. The content of the verse reveals the knowledge of a resurrection appearance of Christ to the disciples and Peter in Galilee.
Mk 16. 8 has caused a great deal of consternation, not only because it seems to be a very odd note on which to end a book, but also because all the other gospels agree that the women did report to the disciples. But the reaction of fear and awe in the presence of the divine is a typical Markan characteristic. [70] The silence of the women was surely meant just to be temporary, [71] otherwise the account itself could not be part of the pre-Markan passion story.
According to Luke the disciples do not believe the women's report (Lk 24. 11). But Luke and John agree that Peter and at least one other disciple rise and run to the tomb to check it out (Lk 24. 12, 24; Jn 20. 2-10). Although Lk 24. 12 was regarded by Westcott and Hort as a Western non-interpolation, its presence in the later discovered P75 has convinced an increasing number of scholars of its authenticity. That Luke and John share the same tradition is evident not only from the close similarity of Lk 24. 12 to John's account, but also from the fact that Jn 20. 1 more nearly resembles Luke in the number, selection, and order of the elements narrated than any other gospel. [72]
Lk 24. 24 makes it clear that Peter did not go to the tomb alone; John names his companion as the Beloved Disciple. This would suggest that John intends this disciple to be a historical person, and his identification could be correct. [73] The authority of the Beloved Disciple stands behind the gospel as the witness to the accuracy of what is written therein (Jn 21. 24; the verse certainly applies to the gospel as a whole, not just the epilogue, for the whole gospel enjoys the authentication of this revered disciple, not merely a single chapter [74] ), and the identification of his role in the disciples' visit to the empty tomb could be the reminiscence of an eyewitness. So although only Peter was named in the tradition, accompanied by an anonymous disciple, the author of the fourth gospel claimed to know who this unnamed disciple was and identifies him. The Beloved Disciple is portrayed as a real historical person who went with Peter to the empty tomb and whose memories stand behind the fourth gospel as their authentication.
If the Beloved Disciple in chap. 20 is then conceived as a historical person, is his presence an unhistorical, redactional addition? Schnackenburg thinks that few words need to be said to prove that he is an unhistorical addition: in vs. 2, 3 he is easily set aside, the competitive race to the tomb is redactional, v. 9 is in style and content from the evangelist, and v. 9 refers in reality to Mary and Peter. [75] But these considerations do not prove that the Beloved Disciple was not historically present, but only that he was not mentioned in the particular tradition. That could have been proved from Lk 24. 12 alone. What I am suggesting is that the reminiscences of the Beloved Disciple are employed by the evangelist to supplement and fill out his tradition. Thus the first three considerations ought not to surprise us. Indeed, the third consideration supports the fact that the Beloved Disciple's role here was not added later to the gospel by any supposed editor who tacked on chap. 21. That hon ephilei instead of hon egapa is used in v. 2 also indicates that the evangelist himself wrote these words and not a later redactor. In fact the unity and continuity of vs. 2-10 preclude that the evangelist wrote only of Peter and Mary's visit and that the Beloved Disciple was artfully inserted by a later editor. Lk 24. 24 reveals that Peter did not go to the tomb alone, so one cannot exclude that the Beloved Disciple went with him. As for v. 9, it plainly refers to the disciples in v. 10 (Mary is not even mentioned after v. 2) and is not part of the pre-Johannine tradition, being typical for John (cf. 2. 22; 12. 16). Thus, the evangelist, who knew the Beloved Disciple and wrote on the basis of his memories, includes his part in these events. If it be said that the evangelist simply invented the figure of the Beloved Disciple, 21. 24 becomes a deliberate falsehood, the close affinities between chaps. 1-20 and 21 are ignored, it becomes difficult to explain how then the person of the Beloved Disciple should come to exist and why he is inserted in the narratives, and the widespread concern over his death becomes unintelligible. The evangelist and the gospel certainly stem out of the same circle that appended chap. 21 and adds its signature in 21. 24c. Therefore, it seems to me, the role of the Beloved Disciple in 20. 2-10 can only be that of a historical participant whose memories fill out the tradition received. There seems to be no plausible way of denying the historicity of the Beloved Disciple's role in the visit to the empty tomb. [76]
It might be urged against the historicity of the disciples' visit to the tomb that the disciples had fled Friday night to Galilee and so were not present in Jerusalem. But not only does Mk 14. 50 not contemplate this, but it seems unreasonable to think that the disciples, fleeing from the garden, would return to where they were staying, grab their things, and keep on going all the way back to Galilee. And scholars who support such a flight must prove that the denial of Peter is unhistorical, since it presupposes the presence of the disciples in Jerusalem. But there is no reason to regard this tradition, attested in all four gospels, as unhistorical. [77] In its favor is the fact that it is improbable that the early Christians should invent a tale concerning the apostasy of the man who was their leader.
Sometimes it is said that the disciples could not have been in Jerusalem, since they are not mentioned in the trial, execution, or burial stories. But could it not be that the disciples were hiding for fear of the Jews, just as the gospels indicate? There is no reason why the passion story would want to portray the church's leaders as cowering in seclusion while only the women dared to venture about openly, were this not historical; the disciples could have been made to flee to Galilee while the women stayed behind. This would even have had the advantage of making the appearances unexpected by keeping the empty tomb unknown to the disciples. But, no, the pre-Markan passion story says, 'But go, tell his disciples and Peter that he is going before you to Galilee; there you will see him . . .'(Mk 16. 7). So the disciples were probably in Jerusalem, but lying low. Besides this, it is not true that the disciples are missing entirely from the scene. All the gospels record the denial of Peter while the trial of Jesus was proceeding; John adds that there was another disciple with him, perhaps the Beloved Disciple (Jn 18. 15). According to Luke, at the execution of Jesus, 'all his acquaintances ... stood at a distance and saw these things' (Lk 23. 49). John says that the Beloved Disciple was at the cross with Jesus' mother and bore witness to what happened there (Jn 19. 26-27, 35). Attempts to interpret the Beloved Disciple as a symbol here or to lend a purely theological meaning to the passage are less than convincing. So it is not true that the disciples are completely absent during the low point in the course of events prior to the resurrection. There are therefore a good number of traditions that the disciples were in Jerusalem during the weekend; that at least two of them visited the tomb cannot therefore be excluded.
It is often asserted that the story of the disciples' visit to the tomb is an apologetic development designed to shore up the weak witness of the women. Not only does there seem to be no proof for this, but against it stand the traditions that the disciples were in Jerusalem. For if the women did find the tomb empty on Sunday morning, and reported this to the disciples, then it is implausible that the disciples would sit idly by not caring to check out the women's news. That one or two of them should run back to the tomb with the women, even if only to satisfy their doubts that the women were mistaken, is very likely. Hence, attempts to dismiss the empty tomb narratives as unhistorical legends are not only insufficiently supported by the evidence, but contain positive implausibilities.
Having examined the testimony of Paul and the gospels concerning the empty tomb of Jesus, what is the evidence in favor of its historicity?
1. Paul's testimony implies the historicity of the empty tomb. Few facts could be more certain than that Paul at least believed in the empty tomb. But the question now presses, how is it historically possible for the apostle Paul to have presupposed so confidently the empty tomb of Jesus if in fact the tomb were not empty? Paul was in Jerusalem six years after the events themselves. The tomb must have been empty by then. But more than that, Peter, James, and the other Christians in Jerusalem with whom Paul spoke must have also accepted that the tomb was found empty at the resurrection. It would have been impossible for the resurrection faith to survive in face of a tomb containing the corpse of Jesus. The disciples could not have adhered to the resurrection; even if they had, scarcely any one would have believed them; and their Jewish opponents could have exposed the whole affair as a poor joke by displaying the body of Jesus. Moreover, all this aside, had the tomb not been empty, then Christian theology would have taken an entirely different route than it did, trying to explain how resurrection could still be possible, though the body remained in the grave. But neither Christian theology nor apologetics ever had to face such a problem. It seems inconceivable that Pauline theology concerning the bodily resurrection could have taken the direction that it did had the tomb not been empty from the start. But furthermore, we have observed that the 'he was raised' in the formula corresponds to the empty tomb pericope in the gospels, the egegertai mirroring the egerthe. This makes it likely that the empty tomb tradition stands behind the third element of the formula, just as the burial tradition stands behind the second. Two conclusions follow. First, the tradition that the tomb was found empty must be reliable. For time was insufficient for legend to accrue, and the presence of the women witnesses in the Urgemeinde would prevent it. Second, Paul no doubt knew the tradition of the empty tomb and thus lends his testimony to its reliability. If the discovery of the empty tomb is not historical then it seems virtually inexplicable how both Paul and the early formula could accept it.
2. The presence of the empty tomb pericope in the pre-Markan passion story supports its historicity. The empty tomb story was part of, perhaps the close of, the pre-Markan passion story. According to Pesch, [78] geographical references, personal names, and the use of Galilee as a horizon all point to Jerusalem as the fount of the pre-Markan passion story. As to its age, Paul's Last Supper tradition (I Cor 11. 23-25) presupposes the pre-Markan passion account; therefore, the latter must have originated in the first years of existence of the Jerusalem Urgemeinde. Confirmation of this is found in the fact that the pre-Markan passion story speaks of the 'high priest' without using his name (14. 53, 54, 60, 61, 63). This implies (nearly necessitates, according to Pesch) that Caiaphas was still the high priest when the pre-Markan passion story was being told, since then there would be no need to mention his name. Since Caiaphas was high priest from A.D. 18-37, the terminus ante quem for the origin of the tradition is A.D. 37. Now if this is the case, then any attempt to construe the empty tomb account as an unhistorical legend is doomed to failure. It is astounding that Pesch himself can try to convince us that the pre-Markan empty tomb story is a fusion of three Gattungen from the history of religions: door-opening miracles, epiphany stories, and stories of seeking but not finding persons who have been raised from the dead! [79] On the contrary: given the age (even if not as old as Pesch argues) and the vicinity of origin of the pre-Markan passion story, it seems more plausible to regard the empty tomb story as substantially accurate historically.
3. The use of 'the first day of the week' instead of 'on the third day' points to the primitiveness of the tradition. The tradition of the discovery of the empty tomb must be very old and very primitive because it lacks altogether the third day motif prominent in the kerygma, which is itself extremely old, as evident by its appearance in I Cor 15. 4. If the empty tomb narrative were a late and legendary account, then it could hardly have avoided being cast in the prominent, ancient, and accepted third day motif. [80] This can only mean that the empty tomb tradition ante-dates the third day motif itself. Again, the proximity of the tradition to the events themselves makes it idle to regard the empty tomb as a legend. It makes it highly probable that on the first day of the week the tomb was indeed found empty.
4. The nature of the narrative itself is theologically unadorned and nonapologetic. The resurrection is not described, and we have noted the lack of later theological motifs that a late legend might be expected to contain. This suggests the account is primitive and factual, even if dramatization occurs in the role of the angel. Very often contemporary theologians urge that the empty tomb is not a historical proof for the resurrection because for the disciples it was in itself ambiguous and not a proof. But that is precisely why the empty tomb story is today so credible: because it was not an apologetic device of early Christians; it was, as Wilckens nicely puts it, 'a trophy of God's victory'. [81] The very fact that they saw in it no proof ensures that the narrative is substantially uncolored by apologetic motifs and in its primitive form.
5. The discovery of the tomb by women is highly probable. Given the low status of women in Jewish society and their lack of qualification to serve as legal witnesses, [82] the most plausible explanation, in light of the gospels' conviction that the disciples were in Jerusalem over the weekend, why women and not the male disciples were made discoverers of the empty tomb is that the women were in fact the ones who made this discovery. This conclusion is confirmed by the fact that there is no reason why the later Christian church should wish to humiliate its leaders by having them hiding in cowardice in Jerusalem, while the women boldly carry out their last devotions to Jesus' body, unless this were in fact the truth. Their motive of anointing the body by pouring oils over it is entirely plausible; indeed, its apparent conflict with Mk 14. 8 makes it historically probable that this was the reason why the women went to the tomb. Furthermore, the listing of the women's names again precludes unhistorical legend at the story's core, for these persons were known in the Urgemeinde and so could not be associated with a false account.
6. The investigation of the empty tomb by the disciples is historically probable. Behind the fourth gospel stands the Beloved Disciple, whose reminiscences fill out the traditions employed. The visit of the disciples to the empty tomb is therefore attested not only in tradition but by this disciple. His testimony has therefore the same first hand character as Paul's and ought to be accepted as equally reliable. The historicity of the disciples' visit is also made likely by the plausibility of the denial of Peter tradition, for if he was in Jerusalem, then having heard the women's report he would quite likely check it out. The inherent implausibility of and absence of any evidence for the disciples' flight to Galilee render it highly likely that they were in Jerusalem, which fact makes the visit to the tomb also likely.
7. It would have been impossible for the disciples to proclaim the resurrection in Jerusalem had the tomb not been empty. The empty tomb is a sine qua non of the resurrection. The notion that Jesus rose from the dead with a new body while his old body lay in the grave is a purely modern conception. Jewish mentality would never have accepted a division of two bodies, one in the tomb and one in the risen life. [83] When therefore the disciples began to preach the resurrection in Jerusalem, and people responded, and the religious authorities stood helplessly by, the tomb must have been empty. The fact that the Christian fellowship, founded on belief in Jesus' resurrection, could come into existence and flourish in the very city where he was executed and buried seems to be compelling evidence for the historicity of the empty tomb.
8. The Jewish polemic presupposes the empty tomb. From Matthew's story of the guard at the tomb (Mt. 27. 62-66; 28. 11-15), which was aimed at refuting the widespread Jewish allegation that the disciples had stolen Jesus' body, we know that the disciples' Jewish opponents did not deny that Jesus' tomb was empty. When the disciples began to preach that Jesus was risen, the Jews responded with the charge that the disciples had taken away his body, to which the Christians retorted that the guard would have prevented any such theft. The Jews then asserted that the guard had fallen asleep and that the disciples stole the body while the guard slept. The Christian answer was that the Jews had bribed the guard to say this, and so the controversy stood at the time of Matthew's writing. The whole polemic presupposes the empty tomb. Mahoney's objection, that the Matthaean narrative presupposes only the preaching of the resurrection, and that the Jews argued as they did only because it would have been 'colorless' to say the tomb was unknown or lost, fails to perceive the true force of the argument. [84] The point is that the Jews did not respond to the preaching of the resurrection by pointing to the tomb of Jesus or exhibiting his corpse, but entangled themselves in a hopeless series of absurdities trying to explain away his empty tomb. The fact that the enemies of Christianity felt obliged to explain away the empty tomb by the theft hypothesis shows not only that the tomb was known (confirmation of the burial story), but that it was empty. (Oddly enough, Mahoney contradicts himself when he later asserts that it was more promising for the Jews to make fools of the disciples through the gardener-misplaced-the-body theory than to make them clever hoaxers through the theft hypothesis. [85] So it was not apparently the fear of being 'colorless' that induced the Jewish authorities to resort to the desperate expedient of the theft hypothesis.) 
The proclamation 'He is risen from the dead' (Mt. 27.64) prompted the Jews to respond, 'His disciples ... stole him away' (Mt. 28. 13). Why? The most probable answer is that they could not deny that his tomb was empty and had to come up with an alternative explanation. So they said the disciples stole the body, and from there the polemic began. Even the gardener hypothesis is an attempt to explain away the empty tomb. The fact that the Jewish polemic never denied that Jesus' tomb was empty, but only tried to explain it away is compelling evidence that the tomb was in fact empty.
Taken together these eight considerations furnish powerful evidence that the tomb of Jesus was actually found empty on Sunday morning by a small group of his women followers. As a plain historical fact this seems to be amply attested. As Van Daalen has remarked, it is extremely difficult to object to the fact of the empty tomb on historical grounds; most objectors do so on the basis of theological or philosophical considerations. [86] But these, of course, cannot change historical fact. And, interestingly, more and more New Testament scholars seem to be realizing this fact; for today, many, if not most, exegetes would defend the historicity of the empty tomb of Jesus, and their number continues to increase. [87]
See Rudolph Bultmann, 'Neues Testament und Mythologie', in Kerygma und Mythos 1, ed. Hans-Werner Bartsch, 5th ed., TF I (Hamburg: Herbert Reich, 1967) 44-8. Very typical is R. H. Fuller's characterization of the resurrection as a 'meta-historical event' (R. H. Fuller, The Formation of the Resurrection Narratives [London: SPCK, 1972] 23), a phrase which is actually a self-contradiction, since an event is that which happens and so is ipso facto a part of history. Robinson rightly scores Fuller's disclaimers that this 'meta-historical event' left only a negative mark on history: 'Yet the negative mark, by which he evidently means not simply that there was nothing to show for it but that there was nothing to show for it (i.e. an empty tomb), is "within history" and must therefore be patient of historical inquiry.' (J. A. T. Robinson, The Human Face of God [London: SCM, 1973] 136.)
Grass argues that even if Paul held that the old body would be raised transformed, that does not guarantee that Paul knew of Jesus' empty tomb. It would only show that he would have believed it to be so on dogmatic grounds. (Grass, Ostergeschehen, 172.)
See the excellent study by Karl Bornhäuser, Die Gebeine der Toten, BFCT 26 (Gütersloh: C. Bertelsmann, 1921). Some critics acknowledge the accuracy of Bornhäuser's exposition of resurrection in the Old Testament, but brush it aside with a word, that the New Testament knows nothing of such a conception. They ignore his clear statement that what is here most important is not what is said in the New Testament, but what is presupposed by the New Testament. (Ibid., 6.) Bornhäuser's thesis is that in the Old Testament the grave is the place where the corpse decays but the bones remain and rest until the resurrection, at which they are raised. There is no Auferweckung of the soul, nor even of the flesh; it is much more, properly speaking, an Auferstehung and Auferweckung of the bones. (Ibid., 26.) The New Testament presupposes this same conception. Mt. 23. 27; Jn 5. 28 show that Jesus regarded the tomb as the place where the bones are, which would be raised at the resurrection. Paul's terminology is thoroughly Pharisaic; it should never have come to be, states Bornhäuser, that the 'he was raised' should be understood as anything other than the resurrection from the grave. (Ibid., 33.) Phil 1. 23; 2 Cor 5. 8 show clearly that for Paul it is not the spirit that is asleep in death. When he says that those who are asleep will rise at the last trumpet (1 Thess 4. 13-17), he means the dead in the graves. Thus, the grave would have to be empty after the resurrection. (See also Hastings' Encyclopaedia of Religion and Ethics, s.v. 'Bones', by H. Wheeler Robinson; Joseph Bonsirven, Le Judaisme palestinien au temps de Jesus Christ, 2 vols. [Paris: Beauchesne, 1934] 1: 484; Künneth, Theology, 94.)
Rengstorf, Auferstehung, 62. Comments Ellis: 'it is very unlikely that the earliest Palestinian Christians could conceive of any distinction between resurrection and physical, "grave-emptying" resurrection. To them an anastasis (resurrection) without an empty grave would have been about as meaningful as a square circle.' (E. Earle Ellis, ed., The Gospel of Luke, NCB [London: Nelson, 1966] 273.) See also Moule, Significance, 9.
The mention of the empty tomb would not pass well with the structure and rhythm of the formula in any case, since the subject of each sentence is Christos and the empty tomb is not something that Christ did.
As von Campenhausen urges, the detail 'on the third day' must have a biblical counterpart to warrant its inclusion, but the Scripture passages are so vague that the third day must have been somehow already given before it could be discovered in the Old Testament. (Von Campenhausen, Ablauf, 11-12.) So also Michael Ramsey, The Resurrection of Christ (London: Centenary Press, 1945) 25; C. F. D. Moule, The Birth of the New Testament, 2nd ed. rev. London: Adam & Charles Black, 1966, 84-5; Barrett, First Epistle, 340.
Actually if Paul was in Jerusalem prior to his trip to Damascus, as Acts reports, then he probably would have heard of the empty tomb then, not, indeed, from the Christians, but from the Jewish authorities in whose employ he was. For even if the Christians in their enthusiasm had not checked to see if the tomb of Jesus was empty, the Jewish authorities could be guilty of no such oversight. So ironically Paul may have known of the empty tomb even before his conversion.
Mk 15. 40-41, which first names the women, cannot be an independent piece of tradition, since it makes sense only in its context. But neither can these verses be editorially constructed out of 15. 47 and 16. 1 because then the appellation 'the younger' is inexplicable, as is the fusion of what would normally designate the wife of James and the wife of Joses into one woman, the mother of James and Joses. But if 15. 40-41 are part of the pre-Markan tradition, then so are probably 15. 47 and 16. 1. For rather than repeat the long identification of Mary in 15. 40, the tradition names her by one son in 15. 47 and the other in 16. 1; thus 15. 47 and 16. 1 actually presuppose each other's existence. And their juxtaposition is by no means a useless duplication: the omission and re-introduction of Salome's name suggests that the witnesses to the crucifixion, burial, and empty tomb are being recalled here.
Thus Wilckens argues that 16. 1 is a later addition designed to protect the women against the charge of breaking the Sabbath. Originally 16. 2-6a was the close of the Passion story. (Wilckens, Auferstehung, 56-63.) For a critique of Wilckens' hypothesis see Josef Blinzler, 'Die Grablegung Jesu in historischer Sicht', in Resurrexit, ed. Dhanis, 65-6. Blinzler argues that all the lists are old and unchanged. (Ibid., 65-8.)
Wilckens, Auferstehung, 61. The passion story could not have ended with the death and burial of Jesus without assurance of victory; the discovery of the empty tomb by the women was part of the passion story. (Brown, John, 978; Blinzler, 'Grablegung', 76; Rudolf Schnackenburg, Das Johannesevangelium, 3 vols., 2nd ed., HTKNT 4 [Freiburg: Herder, 1976] 3: 353.)
Nauck, 'Bedeutung', 243-67. According to Kremer, every theological reflection on the meaning of the resurrection is lacking, so the tradition must come from a very early time. In favor of its origin in Palestine (Jerusalem) count not only the interest in the empty tomb itself, but also the names of the women and the Semitic te mia ton sabbaton (cf. prote sabbatou [16. 9]; 'after three days' [8. 31; 9. 31; 10. 34]). (Kremer, "'Grab"', 153.)
On neaniskos as an angel, cf. 2 Macc 3. 26, 33; Lk 24. 4; Gospel of Peter 9; Josephus Antiquities of the Jews 5.277. The white robe is traditional for angels (cf. Rev 9. 13; 10. 1). In Mark fear and awe are the typical responses to the divine. The other gospels understood Mark's figure as an angel.
It is highly unlikely that the pre-Markan tradition lacked the angel, for the climax of the story comes with his words in vs. 5-6 and without him the tomb is ambiguous in its meaning. (Ulrich Wilckens, 'Die Perikope vom leeren Grabe Jesu in der nachmarkinischen Traditionsgeschichte', in Festschrift für Friedrich Smend [Berlin: Merseburger, 1963] 32; Schenke, Grab, 69-71; John E. Alsup, The Post-Resurrection Appearance Stories of the Gospel Tradition, CTM A5 [Stuttgart: Calwer Verlag, 1975] 92-3; Kremer, Osterevangelien, 45-7.)
For example, Schenke's troop of objections against v. 7: (1) it introduces a thought independent of v. 6; (2) egerthe is not mentioned further; (3) 14. 28 is an insertion; (4) v. 7 does not correspond with the women's reaction; (5) v. 7 introduces the apostles and switches to direct speech. (Schenke, Grab, 43-7.) Except for (3) these hardly merit refutation. V. 7 introduces a thought no more independent of v. 6 than v. 6b of v. 6a. There is no need to mention further the resurrection; having been raised, Jesus is going before the disciples to Galilee. Given Mark's theology, the women's reaction is typical. The introduction of the apostles says nothing for v. 7's being an insertion, nor does direct or indirect speech.
It is sometimes urged that the Fayum Gospel Fragment, a third century compilation from the gospels which omits v. 28, testifies to a tradition lacking this verse. (Walter Grundmann, Das Evangelium nach Markus, 7th rev. ed., THKNT 2 [Berlin: Evangelische Verlagsanstalt, 1977] 395.) But as a compilation the fragment by its very nature omits material and is no evidence for the absence of v. 28 in the passion tradition. See M. J. Lagrange, L'Evangile selon saint Marc (Paris: Librairie Lecoffre, 1966) 383; Lane, Mark, 510; Pesch, Markusevangelium, 2: 381.
So C. F. D. Moule, 'St. Mark xvi.8 once more', NTS 2 (1955-6) 58-9; Dhanis, 'Ensevelissement', 389; C. E. B. Cranfield, The Gospel according to Saint Mark, CGTC (Cambridge: Cambridge University Press, 1963) 469; Lagrange, Marc, 448; I. Howard Marshall, The Gospel of Luke, NIGTC (Exeter: Paternoster Press, 1978) 887. See the helpful discussion of the women's silence in Bode, Easter, 39-44. He distinguishes five possible interpretations: (1) The silence explains why the legend of the empty tomb remained so long unknown. (2) The silence is an instance of Mark's Messianic secret motif. (3) The silence was temporary. (4) The silence served the apologetic purpose of separating the apostles from the empty tomb. (5) The silence is the paradoxical human reaction to divine commands as understood by Mark. But (1) is now widely rejected as implausible, since the empty tomb story is a pre-Markan tradition. (2) is inappropriate in the post-resurrection period when Jesus may be proclaimed as the Messiah. As for (4), there is no evidence that the silence was designed to separate the apostles from the tomb. Mark does not hold that the disciples had fled back to Galilee independently of the women. So there is no implication that the disciples saw Jesus without having heard of the empty tomb. It is pointless to speak of 'apologetics' when Mark does not even imply that the disciples went to Galilee and saw Jesus without hearing the women's message, much less draw some triumphant apologetic conclusion as a result of this. In fact there were also traditions that the disciples did visit the tomb, after the women told them of their discovery, but Mark breaks off his story before that point. As for (5) this solution is entirely too subtle, drawing the conclusion that because people talked when Jesus told them not to, therefore, the women, having been told to talk, did not. Therefore (3) is most probable. 
The fear and silence are Markan motifs of divine encounter and were not meant to imply an enduring silence.
I find it implausible either that the Beloved Disciple should have lied to his students that he was there when he was not or that the entire Johannine community should lie in asserting that their master had taken part in certain historical events when they know he had not. See excellent comments by Brown, John, 1127-9.
So Brown, John, 840-1: 983; Kremer, "'Grab"', 158. Von Campenhausen, Ablauf, 44-5, also maintains the presence of the disciples in Jerusalem, but his view that Peter, inspired by the empty tomb, led the disciples back to Galilee to see Jesus fails in light of the traditions that the empty tomb did not awaken faith and is predicated on a doubtful interpretation of Lk 22. 31, which says nothing about Peter's convincing the others to believe that Jesus was risen.
Ibid., 2: 522-36. Pesch thinks the stone's being rolled away is the product of door-opening miracle stories. When it is pointed out that no such door-opening is narrated in Mark, Pesch gives away his case by asserting that it is a 'latent' door-opening miracle! The angelic appearance he attributes to epiphany stories, though without showing the parallels. Finally, he appeals to a Gattung for seeking, but not finding someone for the search for Jesus' body, adducing several unclear OT texts (e.g. 2 Kings 2. 16-18; Ps 37. 36; Ez 26. 21) plus a spate of post-Christian or Christian-influenced sources (Gospel of Nicodemus 16. 6; Testament of Job 39-40) and even question-begging texts from the New Testament itself. He uncritically accepts Lehmann and MacArthur's analysis of the third day motif, which he equates with Mark's phrase 'on the first day'! His assertion that the fact that the women were known in the Urgemeinde cannot prevent legend since many legends are attested about the disciples is a petitio principii. He fails to come to grips with his own early dating and never shows how legend could develop in so short a span in the presence of those who knew better. For a critique of Pesch's position as well as a timely warning against New Testament exegesis's falling into the fallacies of the old history of religions school, see Peter Stuhlmacher, "'Kritischer müssten mir die Historisch-Kritischen sein!"', TQ 153 (1973) 244-51.
Mahoney, Disciples, 159. His further objection that this admission by the Jews is found only in a Christian document also misses the point; the course of the argument in the polemic presupposes the empty tomb. The Christians were doing their best to refute the charge of theft, an allegation which tacitly presupposes the tomb was empty.
The Historicity of the Empty Tomb of Jesus
Summary
An examination of both Pauline and gospel material leads to eight lines of evidence in support of the conclusion that Jesus's tomb was discovered empty: (1) Paul's testimony implies the historicity of the empty tomb, (2) the presence of the empty tomb pericope in the pre-Markan passion story supports its historicity, (3) the use of 'on the first day of the week' instead of 'on the third day' points to the primitiveness of the tradition, (4) the narrative is theologically unadorned and non-apologetic, (5) the discovery of the tomb by women is highly probable, (6) the investigation of the empty tomb by the disciples is historically probable, (7) it would have been impossible for the disciples to proclaim the resurrection in Jerusalem had the tomb not been empty, (8) the Jewish polemic presupposes the empty tomb.
Until recently the empty tomb has been widely regarded as both an offense to modern intelligence and an embarrassment for Christian faith; an offense because it implies a nature miracle akin to the resuscitation of a corpse and an embarrassment because it is nevertheless almost inextricably bound up with Jesus' resurrection, which lies at the very heart of the Christian faith. But in the last several years, a remarkable change seems to have taken place, and the scepticism that so characterized earlier treatments of this problem appears to be fast receding. [1] Though some theologians still insist with Bultmann that the resurrection is not a historical event, [2] this incident is certainly presented in the gospels as a historical event, one of the manifestations of which was that the tomb of Jesus was reputedly found empty on the first day of the week by several of his women followers; this fact, at least, is therefore in principle historically verifiable. But how credible is the evidence for the historicity of Jesus' empty tomb?
In order to answer this question, we need to look first at one of the oldest traditions contained in the New Testament concerning the resurrection.
Archaeology | Was the tomb of Jesus discovered? | no_statement | the "tomb" of jesus was not "discovered".. there is no evidence to suggest that the "tomb" of jesus was "discovered".. the discovery of the "tomb" of jesus has not been confirmed. | https://roomfordoubt.com/post/what-evidence-is-there-if-any-of-jesus-burial-and-the-empty-tomb | What Evidence is There, If Any, of Jesus' Burial and the Empty Tomb ...
What Evidence is There, If Any, of Jesus’ Burial and the Empty Tomb?
Question submitted by Joseph L. on Room For Doubt’s Facebook page: “Is there any evidence outside the bible that Jesus was placed in a tomb to begin with, and not left up on the cross, or disposed of some other way?”
Some Responses:
Thanks for your great question! In making a historical case for the resurrection of Jesus, it is critical to consider whether or not there are any good reasons to believe that Jesus was buried in a tomb and that the tomb was found empty. Fortunately, there are at least three outstanding reasons to accept Jesus’ burial in a tomb that was later empty.
First, it is significant that most scholars believe that Jesus was buried in a known tomb owned by Joseph of Arimathea. The burial in a known tomb by Joseph of Arimathea is considered historical because the Gospels describe Joseph as a member of the Jewish Sanhedrin that condemned Jesus. He is unlikely to be a person that Christians made up, since he would be a well-known individual. The Sanhedrin, the Jewish high court, consisted of the seventy leading men in Judaism. It would not be helpful for the Gospel writers (writing within a generation of the life of Jesus) to make up the story that a specific Jewish leader buried Jesus if that could be easily denied by their Jewish opponents. It would also have been embarrassing to the disciples that only a Jewish leader was brave enough to go to Pilate and ask to bury Jesus. Hostility between the Sanhedrin and early Christians was intense, and it is unlikely that the disciples would invent a story that a member of the Sanhedrin had the guts to go to Pilate and bury Jesus while all of them were hiding in fear. Historians find a story more credible when that story would be embarrassing to the person telling it.
A second reason to accept the empty tomb is that the tomb was first discovered empty by women. Women were not considered reliable witnesses in Israel at the time of Jesus. In fact, women could not even testify in a court of law. So historians find it unlikely that Christians would later make up the embarrassing fact that women were the first ones to discover the empty tomb. It would not have helped them to make their case for the resurrection in that culture, so the most likely reason that they report women being the ones to discover the empty tomb is that Jesus was buried in a tomb and women really did discover it empty.
Third, the first response by the Jewish leaders clearly admits to Jesus’ body missing from the known tomb. In Matthew 28:11-15, Matthew reports that the Jews claimed from the beginning that the disciples stole the body of Jesus out of the tomb. More than that, Matthew states that the Jews were still claiming this decades later (at the time when Matthew wrote his Gospel). It would not help him to say that the Jews are circulating this false story if they weren’t actually doing so. If those who rejected Christianity (the Jewish leaders) accept the empty tomb, then historians have excellent reason to believe it is historical. The Jewish leaders were not reporting that Jesus was disposed of in some unknown way; rather, they admitted that he was buried in the tomb and that the body was missing. They just tried to explain away the resurrection.
Ultimately, we don’t have any competing burial story from the first century that would contradict the report in the Gospels that Jesus was buried in a tomb by Joseph of Arimathea and that the tomb was found empty by women. Not only is there no competing account about what happened to the body of Jesus, but the empty tomb is historically plausible for the reasons given above. By far the most reasonable historical conclusion is to accept that Jesus was buried in a tomb and that the tomb became empty.
–Zach Breitenbach, Director of the Worldview Center at Connection Pointe in Brownsburg, IN and the former Associate Director of Room For Doubt.
Zach Breitenbach is the Director of the Worldview Center at Connection Pointe in Brownsburg, IN and the former Associate Director of Room For Doubt. He has degrees from N.C. State (BS, MBA), LCU (MA in Apologetics), and Liberty University (PhD in Theology & Apologetics).
Folklore | Was there ever a real life Dracula? | yes_statement | there was a "real" "life" dracula.. dracula was a "real" historical figure. | https://www.nbcnews.com/sciencemain/vlad-impaler-real-dracula-was-absolutely-vicious-8c11505315 | Vlad the Impaler: The real Dracula was absolutely vicious

Vlad the Impaler: The real Dracula was absolutely vicious
A portrait of Vlad the Impaler, circa 1450, from a painting in Castle Ambras in the Tyrol. (Getty)
Oct. 31, 2013, 5:22 PM UTC
By Marc Lallanilla
Few names have cast more terror into the human heart than Dracula. The legendary vampire, created by author Bram Stoker for his 1897 novel of the same name, has inspired countless horror movies, television shows and other bloodcurdling tales of vampires.
Though Dracula may seem like a singular creation, Stoker in fact drew inspiration from a real-life man with an even more grotesque taste for blood: Vlad III, Prince of Wallachia or — as he is better known — Vlad the Impaler (Vlad Tepes), a name he earned for his favorite way of dispensing with his enemies.
Vlad III was born in 1431 in Transylvania, a mountainous region in modern-day Romania. His father was Vlad II Dracul, ruler of Wallachia, a principality located to the south of Transylvania. Vlad II was granted the surname Dracul ("dragon") after his induction into the Order of the Dragon, a Christian military order supported by the Holy Roman emperor.
Situated between Christian Europe and the Muslim lands of the Ottoman Empire, Transylvania and Wallachia were frequently the scene of bloody battles as Ottoman forces pushed westward into Europe, and Christian Crusaders repulsed the invaders or marched eastward toward the Holy Land.
When Vlad II was called to a diplomatic meeting in 1442 with Sultan Murad II, he brought his young sons Vlad III and Radu along. But the meeting was actually a trap: All three were arrested and held hostage. The elder Vlad was released under the condition that he leave his sons behind.
Years of captivity

Under the Ottomans, Vlad and his younger brother were tutored in science, philosophy and the arts — Vlad also became a skilled horseman and warrior. According to some accounts, however, he may also have been imprisoned and tortured for part of that time, during which he would have witnessed the impalement of the Ottomans' enemies.
The rest of Vlad's family, however, fared even worse: His father was ousted as ruler of Wallachia by local warlords (boyars) and was killed in the swamps near Balteni, Wallachia, in 1447. Vlad's older brother, Mircea, was tortured, blinded and buried alive.
Whether these events turned Vlad III Dracula ("son of the dragon") into a ruthless killer is a matter of historical speculation. What is certain, however, is that once Vlad was freed from Ottoman captivity shortly after his family's death, his reign of blood began.
In 1453, the city of Constantinople fell to the Ottomans, threatening all of Europe with an invasion. Vlad was charged with leading a force to defend Wallachia from an invasion. His 1456 battle to protect his homeland was victorious: Legend holds that he personally beheaded his opponent, Vladislav II, in one-on-one combat.
Bran Castle is better known as Dracula's castle. Located in Transylvania, Romania, it is a major tourist attraction. (Daniel Mihailescu / AFP / Getty Images)
Though he was now ruler of the principality of Wallachia, his lands were in a ruinous state due to constant warfare and the internal strife caused by feuding boyars. To consolidate power, Vlad invited hundreds of them to a banquet. Knowing his authority would be challenged, he had his guests stabbed and their still-twitching bodies impaled.
What is impaling?

Impaling is a particularly gruesome form of torture and death: A wood or metal pole is inserted through the body either front to back, or vertically, through the rectum or vagina. The exit wound could be near the victim's neck, shoulders or mouth.
In some cases, the pole was rounded, not sharp, to avoid damaging internal organs and thereby prolong the suffering of the victim. The pole was then raised vertically to display the victim's torment — it could take hours or days for the impaled person to die.
Though Vlad is widely credited with bringing order and stability to Wallachia, his rule was undisputedly vicious: Dozens of Saxon merchants in Kronstadt, who were once allied with the boyars, were also impaled in 1459.
The Ottoman Turks were never far from Vlad's thoughts — or his borders. When diplomatic envoys had an audience with Vlad in 1459, the diplomats declined to remove their hats, citing a religious custom. Commending them on their religious devotion, Vlad ensured that their hats would forever remain on their heads by having the hats nailed to the diplomats' skulls.
During one of his many successful campaigns against the Ottomans, Vlad wrote to a military ally in 1462, "I have killed peasants, men and women, old and young, who lived at Oblucitza and Novoselo, where the Danube flows into the sea … We killed 23,884 Turks, without counting those whom we burned in homes or the Turks whose heads were cut by our soldiers … Thus, your highness, you must know that I have broken the peace."
Vlad's victories over the invading Ottomans were celebrated throughout Wallachia, Transylvania and the rest of Europe — even Pope Pius II was impressed. But Vlad also earned a much darker reputation: On one occasion, he reportedly dined among a veritable forest of defeated warriors writhing on impaled poles. It's not known whether tales of Vlad III Dracula dipping his bread in the blood of his victims are true, but stories about his unspeakable sadism swirled throughout Europe.
Tens of thousands killed

In total, Vlad is estimated to have killed about 80,000 people through various means. This includes some 20,000 people who were impaled and put on display outside the city of Targoviste: The sight was so repulsive that the invading Ottoman Sultan Mehmed II, after seeing the scale of Vlad's carnage and the thousands of decaying bodies being picked apart by crows, turned back and retreated to Constantinople.
In 1476, while marching to yet another battle with the Ottomans, Vlad and a small vanguard of soldiers were ambushed, and Vlad was killed and beheaded — by most reports, his head was delivered to Mehmed II in Constantinople as a trophy to be displayed above the city's gates.
The Middle Ages were notoriously violent, and the name of Vlad III Dracula may have been a mere historical footnote were it not for an 1820 book by the British consul to Wallachia, William Wilkinson: "An Account of the Principalities of Wallachia and Moldavia: With Various Political Observations Relating to Them." Wilkinson delves into the history of the region, mentioning the notorious warlord Vlad Tepes.
Stoker, who never visited Vlad's homeland, was nonetheless known to have read Wilkinson's book. And if ever there were a historical figure to inspire a bloodthirsty, monstrous fictional character, Vlad III Dracula was one.
Folklore | Was there ever a real life Dracula? | yes_statement | there was a "real" "life" dracula.. dracula was a "real" historical figure. | https://en.wikipedia.org/wiki/Vlad_the_Impaler | Vlad the Impaler - Wikipedia

He was the second son of Vlad Dracul, who became the ruler of Wallachia in 1436. Vlad and his younger brother, Radu, were held as hostages in the Ottoman Empire in 1442 to secure their father's loyalty. Vlad's eldest brother Mircea and their father were murdered after John Hunyadi, regent-governor of Hungary, invaded Wallachia in 1447. Hunyadi installed Vlad's second cousin, Vladislav II, as the new voivode. Hunyadi launched a military campaign against the Ottomans in the autumn of 1448, and Vladislav accompanied him. Vlad broke into Wallachia with Ottoman support in October, but Vladislav returned, and Vlad sought refuge in the Ottoman Empire before the end of the year. Vlad went to Moldavia in 1449 or 1450 and later to Hungary.
Relations between Hungary and Vladislav later deteriorated, and in 1456 Vlad invaded Wallachia with Hungarian support. Vladislav died fighting against him. Vlad began a purge among the Wallachian boyars to strengthen his position. He came into conflict with the Transylvanian Saxons, who supported his opponents, Dan and Basarab Laiotă (who were Vladislav's brothers), and Vlad's illegitimate half-brother, Vlad Călugărul. Vlad plundered the Saxon villages, taking the captured people to Wallachia, where he had them impaled (which inspired his cognomen). Peace was restored in 1460.
The Ottoman Sultan, Mehmed II, ordered Vlad to pay homage to him personally, but Vlad had the Sultan's two envoys captured and impaled. In February 1462, he attacked Ottoman territory, massacring tens of thousands of Turks and Muslim Bulgarians. Mehmed launched a campaign against Wallachia to replace Vlad with Vlad's younger brother, Radu. Vlad attempted to capture the sultan at Târgoviște during the night of 16–17 June 1462. The Sultan and the main Ottoman army left Wallachia, but more and more Wallachians deserted to Radu. Vlad went to Transylvania to seek assistance from Matthias Corvinus, King of Hungary, in late 1462, but Corvinus had him imprisoned.
Vlad was held in captivity in Visegrád from 1463 to 1475. During this period, anecdotes about his cruelty started to spread in Germany and Italy. He was released at the request of Stephen III of Moldavia in the summer of 1475. He fought in Corvinus's army against the Ottomans in Bosnia in early 1476. Hungarian and Moldavian troops helped him to force Basarab Laiotă (who had dethroned Vlad's brother, Radu) to flee from Wallachia in November. Basarab returned with Ottoman support before the end of the year. Vlad was killed in battle before 10 January 1477. Books describing Vlad's cruel acts were among the first bestsellers in the German-speaking territories. In Russia, popular stories suggested that Vlad was able to strengthen his central government only by applying brutal punishments, and many 19th-century Romanian historians adopted a similar view. Vlad's patronymic inspired the name of Bram Stoker's literary vampire, Count Dracula.
Name
The name Dracula, which is now primarily known as the name of a vampire, was for centuries known as the sobriquet of Vlad III.[5][6] Diplomatic reports and popular stories referred to him as Dracula, Dracuglia, or Drakula already in the 15th century.[5] He himself signed his two letters as "Dragulya" or "Drakulya" in the late 1470s.[7] His name had its origin in the sobriquet of his father, Vlad Dracul ("Vlad the Dragon" in medieval Romanian), who received it after he became a member of the Order of the Dragon.[8][9] Dracula is the Slavonic genitive form of Dracul, meaning "[the son] of Dracul (or the Dragon)".[9][10] In modern Romanian, dracul means "the devil", which contributed to Vlad's reputation.[10]
Vlad III is known as Vlad Țepeș (or Vlad the Impaler) in Romanian historiography.[10] This sobriquet is connected to the impalement that was his favorite method of execution.[10] The Ottoman writer Tursun Beg referred to him as Kazıklı Voyvoda (Impaler Lord) around 1500.[10] Mircea the Shepherd, Voivode of Wallachia, used this sobriquet when referring to Vlad III in a letter of grant on 1 April 1551.[11]
The house in the main square of Sighișoara where Vlad's father lived from 1431 to 1435
Vlad II Dracul seized Wallachia after the death of his half-brother Alexander I Aldea in 1436.[19][20] One of his charters (which was issued on 20 January 1437) preserves the first reference to Vlad III and his elder brother, Mircea, mentioning them as their father's "firstborn sons".[14] They were mentioned in four further documents between 1437 and 1439.[14] The last of the four charters also refers to their younger brother, Radu.[14]
After a meeting with John Hunyadi, Voivode of Transylvania, Vlad II Dracul did not support an Ottoman invasion of Transylvania in March 1442.[21] The Ottoman Sultan, Murad II, ordered him to come to Gallipoli to demonstrate his loyalty.[22][23] Vlad and Radu accompanied their father to the Ottoman Empire, where they were all imprisoned.[23] Vlad Dracul was released before the end of the year, but Vlad and Radu remained hostages to secure his loyalty.[22] They were held imprisoned in the fortress of Eğrigöz (now Doğrugöz), according to contemporaneous Ottoman chronicles.[24][25] Their lives were especially in danger after their father supported Vladislaus, King of Poland and Hungary, against the Ottoman Empire during the Crusade of Varna in 1444.[26] Vlad II Dracul was convinced that his two sons would be "butchered for the sake of Christian peace," but neither Vlad nor Radu was murdered or mutilated after their father's rebellion.[26]
Vlad Dracul again acknowledged the sultan's suzerainty and promised to pay a yearly tribute to him in 1446 or 1447.[27] John Hunyadi (who had by then become the regent-governor of Hungary in 1446),[28] invaded Wallachia in November 1447.[29] The Byzantine historian Michael Critobulus wrote that Vlad and Radu fled to the Ottoman Empire, which suggests that the sultan had allowed them to return to Wallachia after their father paid homage to him.[29] Vlad Dracul and his eldest son, Mircea, were murdered.[29][18] Hunyadi made Vladislav II (son of Vlad Dracul's cousin, Dan II) the ruler of Wallachia.[29][18]
Reigns
First rule
Lands ruled around 1390 by Vlad the Impaler's grandfather, Mircea I of Wallachia (the lands on the right side of the Danube had been lost to the Ottomans before Vlad's reign)
Upon the death of his father and elder brother, Vlad became a potential claimant to Wallachia.[18] Vladislav II of Wallachia accompanied John Hunyadi, who launched a campaign against the Ottoman Empire in September 1448.[30][31] Taking advantage of his opponent's absence, Vlad broke into Wallachia at the head of an Ottoman army in early October.[30][31] He had to accept that the Ottomans had captured the fortress of Giurgiu on the Danube and strengthened it.[32]
The Ottomans defeated Hunyadi's army in the Battle of Kosovo between 17 and 18 October.[33] Hunyadi's deputy, Nicholas Vízaknai, urged Vlad to come to meet him in Transylvania, but Vlad refused him.[31] Vladislav II returned to Wallachia at the head of the remnants of his army.[32] Vlad was forced to flee to the Ottoman Empire by 7 December 1448.[32][34]
We bring you the news that [Nicholas Vízaknai] writes to us and asks us to be so kind as to come to him until [John Hunyadi] ... returns from the war. We are unable to do this because an emissary from Nicopolis came to us ... and said with great certainty that [Murad II had defeated Hunyadi]. ... If we come to [Vízaknai] now, the [Ottomans] could come and kill both you and us. Therefore, we ask you to have patience until we see what has happened to [Hunyadi]. ... If he returns from the war, we will meet him, and we will make peace with him. But if you will be our enemies now, and if something happens, ... you will have to answer for it before God.
In exile
Vlad first settled in Edirne in the Ottoman Empire after his fall.[35][36] Not long after, he moved to Moldavia, where Bogdan II (his father's brother-in-law and possibly his maternal uncle) had mounted the throne with John Hunyadi's support in the autumn of 1449.[35][36] After Bogdan was murdered by Peter III Aaron in October 1451, Bogdan's son, Stephen, fled to Transylvania with Vlad to seek assistance from Hunyadi.[35][37] However, Hunyadi concluded a three-year truce with the Ottoman Empire on 20 November 1451,[38] acknowledging the Wallachian boyars' right to elect the successor of Vladislav II if he died.[37]
Vlad allegedly wanted to settle in Brașov (which was a centre of the Wallachian boyars expelled by Vladislaus II), but Hunyadi forbade the burghers to give shelter to him on 6 February 1452.[37][39] Vlad returned to Moldavia where Alexăndrel had dethroned Peter Aaron.[40] The events of his life during the years that followed are unknown.[40] He must have returned to Hungary before 3 July 1456 because, on that day, Hunyadi informed the townspeople of Brașov that he had tasked Vlad with the defence of the Transylvanian border.[41]
Second rule
Consolidation
The circumstances and the date of Vlad's return to Wallachia are uncertain.[41] He invaded Wallachia with Hungarian support either in April, July or August 1456.[42][43] Vladislav II died during the invasion.[43] Vlad sent his first extant letter as voivode of Wallachia to the burghers of Brașov on 10 September.[42] He promised to protect them in case of an Ottoman invasion of Transylvania, but he also sought their assistance if the Ottomans occupied Wallachia.[42] In the same letter, he stated that "when a man or a prince is strong and powerful he can make peace as he wants to; but when he is weak, a stronger one will come and do what he wants to him",[44] showing his authoritarian personality.[42]
Multiple sources (including Laonikos Chalkokondyles's chronicle) recorded that hundreds or thousands of people were executed at Vlad's order at the beginning of his reign.[45] He began a purge against the boyars who had participated in the murder of his father and elder brother or whom he suspected of plotting against him.[46] Chalkokondyles stated that Vlad "quickly effected a great change and utterly revolutionized the affairs of Wallachia" through granting the "money, property, and other goods" of his victims to his retainers.[45] The lists of the members of the princely council during Vlad's reign also show that only two of them (Voico Dobrița and Iova) were able to retain their positions between 1457 and 1461.[47]
Conflict with the Saxons
Vlad sent the customary tribute to the sultan.[48] After John Hunyadi died on 11 August 1456, his elder son, Ladislaus Hunyadi, became the captain-general of Hungary.[49] He accused Vlad of having "no intention of remaining faithful" to the king of Hungary in a letter to the burghers of Brașov, also ordering them to support Vladislaus II's brother, Dan III, against Vlad.[42][50] The burghers of Sibiu supported another pretender, a "priest of the Romanians who calls himself a Prince's son".[51] The latter (identified as Vlad's illegitimate brother, Vlad Călugărul)[42][52] took possession of Amlaș, which had customarily been held by the rulers of Wallachia in Transylvania.[51]
Ladislaus V of Hungary had Ladislaus Hunyadi executed on 16 March 1457.[53] Hunyadi's mother, Elizabeth Szilágyi, and her brother, Michael Szilágyi, stirred up a rebellion against the king.[53] Taking advantage of the civil war in Hungary, Vlad assisted Stephen, son of Bogdan II of Moldavia, in his move to seize Moldavia in June 1457.[54][55] Vlad also broke into Transylvania and plundered the villages around Brașov and Sibiu.[56] The earliest German stories about Vlad recounted that he had carried "men, women, children" from a Saxon village to Wallachia and had them impaled.[57] Since the Transylvanian Saxons remained loyal to the king, Vlad's attack against them strengthened the position of the Szilágyis.[56]
Vlad's representatives participated in the peace negotiations between Michael Szilágyi and the Saxons.[56] According to their treaty, the burghers of Brașov agreed that they would expel Dan from their town.[58][59] Vlad promised that the merchants of Sibiu could freely "buy and sell" goods in Wallachia in exchange for the "same treatment" of the Wallachian merchants in Transylvania.[59] Vlad referred to Michael Szilágyi as "his Lord and elder brother" in a letter on 1 December 1457.[60]
Ladislaus Hunyadi's younger brother, Matthias Corvinus, was elected king of Hungary on 24 January 1458.[61] He ordered the burghers of Sibiu to keep the peace with Vlad on 3 March.[62][63] Vlad styled himself "Lord and ruler over all of Wallachia, and the duchies of Amlaș and Făgăraș" on 20 September 1459, showing that he had taken possession of both of these traditional Transylvanian fiefs of the rulers of Wallachia.[64][65] Michael Szilágyi allowed the boyar Michael (an official of Vladislav II of Wallachia)[66] and other Wallachian boyars to settle in Transylvania in late March 1458.[63] Before long, Vlad had the boyar Michael killed.[67]
In May, Vlad asked the burghers of Brașov to send craftsmen to Wallachia, but his relationship with the Saxons deteriorated before the end of the year.[68] According to a scholarly theory, the conflict emerged after Vlad forbade the Saxons to enter Wallachia, forcing them to sell their goods to Wallachian merchants at compulsory border fairs.[69] Neither Vlad's protectionist measures nor the border fairs are documented, however.[70] Instead, in 1476, Vlad emphasized that he had always promoted free trade during his reign.[71]
The Saxons confiscated the steel that a Wallachian merchant had bought in Brașov without repaying the price to him.[72] In response, Vlad "ransacked and tortured" some Saxon merchants, according to a letter that Basarab Laiotă (a son of Dan II of Wallachia)[73] wrote on 21 January 1459.[74] Basarab had settled in Sighișoara and laid claim to Wallachia.[74] However, Matthias Corvinus supported Dan III (who was again in Brașov) against Vlad.[74] Dan III stated that Vlad had Saxon merchants and their children impaled or burnt alive in Wallachia.[74]
You know that King Matthias has sent me, and when I came to Țara Bârsei the officials and councillors of Brașov and the old men of Țara Bârsei cried to us with broken hearts about the things which Dracula, our enemy, did; how he did not remain faithful to our Lord, the king, and had sided with the [Ottomans]. ... [H]e captured all the merchants of Brașov and Țara Bârsei who had gone in peace to Wallachia and took all their wealth, but he was not satisfied only with the wealth of these people, but he imprisoned them and impaled them, 41 in all. Nor were these people enough; he became even more evil and gathered 300 boys from Brașov and Țara Bârsei that he found in ... Wallachia. Of these, he impaled some and burned others.
Dan III broke into Wallachia, but Vlad defeated and executed him before 22 April 1460.[75][76] Vlad invaded southern Transylvania and destroyed the suburbs of Brașov, ordering the impalement of all men and women who had been captured.[77] During the ensuing negotiations, Vlad demanded the expulsion or punishment of all Wallachian refugees from Brașov.[77] Peace had been restored before 26 July 1460, when Vlad addressed the burghers of Brașov as his "brothers and friends".[78] Vlad invaded the region around Amlaș and Făgăraș on 24 August to punish the local inhabitants who had supported Dan III.[48][79]
Ottoman war
Konstantin Mihailović (who served as a janissary in the sultan's army) recorded that Vlad refused to pay homage to the sultan in an unspecified year.[80] The Renaissance historian Giovanni Maria degli Angiolelli likewise wrote that Vlad had failed to pay tribute to the sultan for three years.[80] Both records suggest that Vlad ignored the suzerainty of the Ottoman Sultan, Mehmed II, as early as 1459, but both works were written decades after the events.[81] Tursun Beg (a secretary in the sultan's court) stated that Vlad only turned against the Ottoman Empire when the sultan "was away on the long expedition in Trebizon" in 1461.[82] According to Tursun Beg, Vlad started new negotiations with Matthias Corvinus, but the sultan was soon informed by his spies.[83][84] Mehmed sent his envoy, the Greek Thomas Katabolinos (also known as Yunus bey), to Wallachia, ordering Vlad to come to Constantinople.[83][84] He also sent secret instructions to Hamza, bey of Nicopolis, to capture Vlad after he crossed the Danube.[85][86] Vlad found out the sultan's "deceit and trickery", captured Hamza and Katabolinos, and had them executed.[85][86]
After the execution of the Ottoman officials, Vlad gave orders in fluent Turkish to the commander of the fortress of Giurgiu to open the gates, enabling the Wallachian soldiers to break into the fortress and capture it.[86] He invaded the Ottoman Empire, devastating the villages along the Danube.[87] He informed Matthias Corvinus about the military action in a letter on 11 February 1462.[88] He stated that more than "23,884 Turks and Bulgarians" had been killed at his order during the campaign.[87][88] He sought military assistance from Corvinus, declaring that he had broken the peace with the sultan "for the honor" of the king and the Holy Crown of Hungary and "for the preservation of Christianity and the strengthening of the Catholic faith".[88] The relationship between Moldavia and Wallachia had become tense by 1462, according to a letter of the Genoese governor of Kaffa.[88]
Having learnt of Vlad's invasion, Mehmed II raised an army of more than 150,000 men that was said to be "second in size only to the one"[89] that occupied Constantinople in 1453, according to Chalkokondyles.[90][91] The size of the army suggests that the sultan wanted to occupy Wallachia, according to a number of historians (including Franz Babinger, Radu Florescu, and Nicolae Stoicescu).[92][90][91] On the other hand, Mehmed had granted Wallachia to Vlad's brother, Radu, before the invasion, showing that the sultan's principal purpose was only a change of ruler.[92]
The Ottoman fleet landed at Brăila (which was the only Wallachian port on the Danube) in May.[90] The main Ottoman army crossed the Danube under the command of the sultan at Nikopol, Bulgaria on 4 June 1462.[93][94] Outnumbered by the enemy, Vlad adopted a scorched earth policy and retreated towards Târgoviște.[95] During the night of 16–17 June, Vlad broke into the Ottoman camp in an attempt to capture or kill the sultan.[93] Either the imprisonment or the death of the sultan would have caused panic among the Ottomans, which could have enabled Vlad to defeat the Ottoman army.[93][95] However, the Wallachians "missed the court of the sultan himself"[96] and attacked the tents of the viziers Mahmud Pasha and Isaac.[95] Having failed to attack the sultan's camp, Vlad and his retainers left the Ottoman camp at dawn.[97] Mehmed entered Târgoviște at the end of June.[93] The town had been deserted, but the Ottomans were horrified to discover a "forest of the impaled" (thousands of stakes with the carcasses of executed people), according to Chalkokondyles.[98]
The sultan's army entered into the area of the impalements, which was seventeen stades long and seven stades wide. There were large stakes there on which, as it was said, about twenty thousand men, women, and children had been spitted, quite a sight for the Turks and the sultan himself. The sultan was seized with amazement and said that it was not possible to deprive of his country a man who had done such great deeds, who had such a diabolical understanding of how to govern his realm and its people. And he said that a man who had done such things was worth much. The rest of the Turks were dumbfounded when they saw the multitude of men on the stakes. There were infants too affixed to their mothers on the stakes, and birds had made their nests in their entrails.
Tursun Beg recorded that the Ottomans suffered from the summer heat and thirst during the campaign.[100] The sultan decided to retreat from Wallachia and marched towards Brăila.[86] Stephen III of Moldavia hurried to Chilia (now Kiliya in Ukraine) to seize the important fortress where a Hungarian garrison had been placed.[91][101][102] Vlad also departed for Chilia, leaving behind a troop 6,000 strong to hinder the march of the sultan's army, but the Ottomans defeated these Wallachians.[100] Stephen of Moldavia was wounded during the siege of Chilia and returned to Moldavia before Vlad came to the fortress.[103]
The main Ottoman army left Wallachia, but Vlad's brother Radu and his Ottoman troops stayed behind in the Bărăgan Plain.[104] Radu sent messengers to the Wallachians, reminding them that the sultan could again invade their country.[104] Although Vlad defeated Radu and his Ottoman allies in two battles during the following months, more and more Wallachians deserted to Radu.[105][106] Vlad withdrew to the Carpathian Mountains, hoping that Matthias Corvinus would help him regain his throne.[107] However, Albert of Istenmező, the deputy of the Count of the Székelys, had recommended in mid-August that the Saxons recognize Radu.[105] Radu also made an offer to the burghers of Brașov to confirm their commercial privileges and pay them a compensation of 15,000 ducats.[105]
Imprisonment in Hungary
Matthias Corvinus came to Transylvania in November 1462.[108] The negotiations between Corvinus and Vlad lasted for weeks,[109] but Corvinus did not want to wage war against the Ottoman Empire.[110][111] At the king's order, his Czech mercenary commander, John Jiskra of Brandýs, captured Vlad near Rucăr in Wallachia.[108][110]
To provide an explanation for Vlad's imprisonment to Pope Pius II and the Venetians (who had sent money to finance a campaign against the Ottoman Empire), Corvinus presented three letters, allegedly written by Vlad on 7 November 1462, to Mehmed II, Mahmud Pasha, and Stephen of Moldavia.[108][109] According to the letters, Vlad offered to join his forces with the sultan's army against Hungary if the sultan restored him to his throne.[112] Most historians agree that the documents were forged to give grounds for Vlad's imprisonment.[110][112] Corvinus's court historian, Antonio Bonfini, admitted that the reason for Vlad's imprisonment was never clarified.[110] Florescu writes, "[T]he style of writing, the rhetoric of meek submission (hardly compatible with what we know of Dracula's character), clumsy wording, and poor Latin" are all evidence that the letters could not be written on Vlad's order.[112] He associates the author of the forgery with a Saxon priest of Brașov.[112]
Vlad was first imprisoned "in the city of Belgrade"[113] (now Alba Iulia in Romania), according to Chalkokondyles.[114] Before long, he was taken to Visegrád, where he was held for fourteen years.[114] No documents referring to Vlad between 1462 and 1475 have been preserved.[115] In the summer of 1475, Stephen III of Moldavia sent his envoys to Matthias Corvinus, asking him to send Vlad to Wallachia against Basarab Laiotă, who had submitted himself to the Ottomans.[108] Stephen wanted to secure Wallachia for a ruler who had been an enemy of the Ottoman Empire, because "the Wallachians [were] like the Turks" to the Moldavians, according to his letter.[116] According to the Slavic stories about Vlad, he was only released after he converted to Catholicism.[2]
Third rule and death
Matthias Corvinus recognized Vlad as the lawful prince of Wallachia, but he did not provide him with military assistance to regain his principality.[108] Vlad settled in a house in Pest.[117] When a group of soldiers broke into the house while pursuing a thief who had tried to hide there, Vlad had their commander executed because they had not asked his permission before entering his home, according to the Slavic stories about his life.[116] Vlad moved to Transylvania in June 1475.[118] He wanted to settle in Sibiu and sent his envoy to the town in early June to arrange a house for him.[118] Mehmed II acknowledged Basarab Laiotă as the lawful ruler of Wallachia.[118] Corvinus ordered the burghers of Sibiu to give 200 golden florins to Vlad from the royal revenues on 21 September, but Vlad left Transylvania for Buda in October.[119]
Vlad bought a house in Pécs that became known as Drakula háza ("Dracula's house" in Hungarian).[120] In January 1476, John Pongrác of Dengeleg, Voivode of Transylvania, urged the people of Brașov to send to Vlad all those of his supporters who had settled in the town, because Corvinus and Basarab Laiotă had concluded a treaty.[120] The relationship between the Transylvanian Saxons and Basarab remained tense, and the Saxons gave shelter to Basarab's opponents during the following months.[120] Corvinus dispatched Vlad and the Serbian Vuk Grgurević to fight against the Ottomans in Bosnia in early 1476.[2][121] They captured Srebrenica and other fortresses in February and March 1476.[2] In the Bosnian campaign, Vlad once again resorted to his terror tactics, impaling captured Turkish soldiers en masse and massacring civilians in conquered settlements. His troops largely destroyed Srebrenica, Kuslat, and Zvornik.[122]
Basarab Laiotă, who tried to defend his throne against Vlad with Ottoman support
Mehmed II invaded Moldavia and defeated Stephen III in the Battle of Valea Albă on 26 July 1476.[123] Stephen Báthory and Vlad entered Moldavia, forcing the sultan to lift the siege of the fortress at Târgu Neamț in late August, according to a letter of Matthias Corvinus.[124] The contemporaneous Jakob Unrest added that Vuk Grgurević and a member of the noble Jakšić family also participated in the struggle against the Ottomans in Moldavia.[124]
Matthias Corvinus ordered the Transylvanian Saxons to support Báthory's planned invasion of Wallachia on 6 September 1476, informing them that Stephen of Moldavia would also invade Wallachia.[125] Vlad stayed in Brașov and confirmed the commercial privileges of the local burghers in Wallachia on 7 October 1476.[125] Báthory's forces captured Târgoviște on 8 November.[125] Stephen of Moldavia and Vlad ceremoniously confirmed their alliance, and they occupied Bucharest, forcing Basarab Laiotă to seek refuge in the Ottoman Empire on 16 November.[125] Vlad informed the merchants of Brașov about his victory, urging them to come to Wallachia.[126] He was crowned before 26 November.[120]
Basarab Laiotă returned to Wallachia with Ottoman support, and Vlad died fighting against them in late December 1476 or early January 1477.[127][120] In a letter written on 10 January 1477, Stephen III of Moldavia related that Vlad's Moldavian retinue had also been massacred.[128] According to the "most reliable sources", Vlad's army of about 2,000 was cornered and destroyed by a Turkish-Basarab force of 4,000 near Snagov.[129] The exact circumstances of his death are unclear. The Austrian chronicler Jacob Unrest stated that a disguised Turkish assassin murdered Vlad in his camp. In contrast, the Russian statesman Fyodor Kuritsyn, who interviewed Vlad's family after his death, reported that the voivode was mistaken for a Turk by his own troops during battle, causing them to attack and kill him. Florescu and Raymond T. McNally commented on this account, noting that Vlad had often disguised himself as a Turkish soldier as part of military ruses.[129] According to Leonardo Botta, the Milanese ambassador to Buda, the Ottomans cut Vlad's corpse into pieces.[128][127] Bonfini wrote that Vlad's head was sent to Mehmed II;[130] it was eventually placed on a high stake in Constantinople.[127] His decapitated head was allegedly displayed and buried in Voivode Street (today Bankalar Caddesi) in Karaköy. It is rumoured that Voyvoda Han, located at Bankalar Caddesi No. 19, was the last stop of Vlad Țepeș's skull.[131][132] Local peasant traditions maintain that what was left of Vlad's corpse was later discovered in the marshes of Snagov by monks from the nearby monastery.[133]
The place of his burial is unknown.[134] According to popular tradition (which was first recorded in the late 19th century),[135] Vlad was buried in the Monastery of Snagov.[136] However, the excavations carried out by Dinu V. Rosetti in 1933 found no tomb below the supposed "unmarked tombstone" of Vlad in the monastery church. Rosetti reported: "Under the tombstone attributed to Vlad, there was no tomb. Only many bones and jaws of horses."[135] Historian Constantin Rezachevici said Vlad was most probably buried in the first church of the Comana Monastery, which had been established by Vlad and was near the battlefield where he was killed.[135]
Vlad had two wives, according to modern specialists.[139][140] His first wife may have been an illegitimate daughter of John Hunyadi, according to historian Alexandru Simon.[139] Vlad's second wife was Justina Szilágyi, who was a cousin of Matthias Corvinus.[139][141] She was the widow of Vencel Pongrác of Szentmiklós when "Ladislaus Dragwlya" married her, most probably in 1475.[142] She survived Vlad, and married thirdly Pál Suki, then János Erdélyi.[141]
Vlad's eldest son,[143] Mihnea, was born in 1462.[144] Vlad's unnamed second son was killed before 1486.[143] Vlad's third son, Vlad Drakwlya, unsuccessfully laid claim to Wallachia around 1495.[143][145] He was the forefather of the noble Drakwla family.[143]
Legacy
Reputation for cruelty
First records
Stories about Vlad's brutal acts began circulating during his lifetime.[146] After his arrest, courtiers of Matthias Corvinus promoted their spread.[147] The papal legate, Niccolo Modrussiense, had already written about such stories to Pope Pius II in 1462.[148] Two years later, the Pope included them in his Commentaries.[149]
Meistersinger Michael Beheim wrote a lengthy poem about Vlad's deeds, allegedly based on his conversation with a Catholic monk who had managed to escape from Vlad's prison.[149] The poem, called Von ainem wutrich der heis Trakle waida von der Walachei ("Story of a Despot Called Dracula, Voievod of Wallachia"), was performed at the court of Frederick III, Holy Roman Emperor, in Wiener Neustadt during the winter of 1463.[149][150] According to one of Beheim's stories, Vlad had two monks impaled to assist them to go to heaven, also ordering the impalement of their donkey because it began braying after its masters' death.[149] Beheim also accused Vlad of duplicity, stating that Vlad had promised support to both Matthias Corvinus and Mehmed II but did not keep the promise.[149]
In 1475, Gabriele Rangoni, Bishop of Eger (and a former papal legate),[151] understood that Vlad had been imprisoned because of his cruelty.[152] Rangoni also recorded the rumour that while in prison Vlad caught rats to cut them up into pieces or stuck them on small pieces of wood, because he was unable to "forget his wickedness".[152][153] Antonio Bonfini also recorded anecdotes about Vlad in his Historia Pannonica around 1495.[154] Bonfini wanted to justify both the removal and the restoration of Vlad by Matthias.[154] He described Vlad as "a man of unheard cruelty and justice".[155] Bonfini's stories about Vlad were repeated in Sebastian Münster's Cosmography.[148] Münster also recorded Vlad's "reputation for tyrannical justice".[148]
... Turkish messengers came to [Vlad] to pay respects, but refused to take off their turbans, according to their ancient custom, whereupon he strengthened their custom by nailing their turbans to their heads with three spikes, so that they could not take them off.
German stories
1499 German woodcut showing Dracule waide dining among the impaled corpses of his victims
Works containing the stories about Vlad's cruelty were published in Low German in the Holy Roman Empire before 1480.[157][158] The stories were allegedly written in the early 1460s, because they describe Vlad's campaign across the Danube in early 1462, but they do not refer to MehmedII's invasion of Wallachia in June of the same year.[159] They provide a detailed narration of the conflicts between Vlad and the Transylvanian Saxons, showing that they originated "in the literary minds of the Saxons".[157]
The stories about Vlad's plundering raids in Transylvania were clearly based on an eyewitness account, because they contain accurate details (including the lists of the churches destroyed by Vlad and the dates of the raids).[159] They describe Vlad as a "demented psychopath, a sadist, a gruesome murderer, a masochist", worse than Caligula and Nero.[158] However, the stories emphasizing Vlad's cruelty are to be treated with caution[160] because his brutal acts were very probably exaggerated (or even invented) by the Saxons.[161]
The invention of movable type printing contributed to the popularity of the stories about Vlad, making them one of the first "bestsellers" in Europe.[115] To enhance sales, they were published in books with woodcuts on their title pages that depicted horrific scenes.[162] For instance, the editions published in Nuremberg in 1499 and in Strasbourg in 1500 depict Vlad dining at a table surrounded by dead or dying people on poles.[162]
... [Vlad] had a big copper cauldron built and put a lid made of wood with holes in it on top. He put the people in the cauldron and put their heads in the holes and fastened them there; then he filled it with water and set a fire under it and let the people cry their eyes out until they were boiled to death. And then he invented frightening, terrible, unheard of tortures. He ordered that women be impaled together with their suckling babies on the same stake. The babies fought for their lives at their mother's breasts until they died. Then he had the women's breasts cut off and put the babies inside headfirst; thus he had them impaled together.
Slavic stories
There are more than twenty manuscripts (written between the 15th and 18th centuries)[163] which preserved the text of the Skazanie o Drakule voievode (The Tale about Voivode Dracula).[164] The manuscripts were written in Russian, but they copied a text that had originally been recorded in a South Slavic language, because they contain expressions alien to the Russian language but used in South Slavic idioms (such as diavol for "devil").[165] The original text was written in Buda between 1482 and 1486.[166]
The nineteen anecdotes in the Skazanie are longer than the German stories about Vlad.[163] They are a mixture of fact and fiction, according to historian Raymond T. McNally.[163] Almost half of the anecdotes emphasize, like the German stories, Vlad's brutality, but they also underline that his cruelty enabled him to strengthen the central government in Wallachia.[167][168] For instance, the Skazanie writes of a golden cup that nobody dared to steal at a fountain[169] because Vlad "hated stealing so violently ... that anybody who caused any evil or robbery ... did not live long", thereby promoting public order. Likewise, where the German story about Vlad's campaign against Ottoman territory underlined his cruel acts, the Skazanie emphasized his successful diplomacy,[170] calling him "zlomudry" ("evil-wise"). On the other hand, the Skazanie sharply criticized Vlad for his conversion to Catholicism, attributing his death to this apostasy.[3] Some elements of the anecdotes were later added to Russian stories about Ivan the Terrible.[171]
Assessment by modern standards
The mass murders that Vlad carried out indiscriminately and brutally would most likely amount to acts of genocide and war crimes by current standards.[172] Romanian defense minister Ioan Mircea Pașcu asserted that Vlad would have been condemned for crimes against humanity had he been put on trial at Nuremberg.[173]
National hero
The Cantacuzino Chronicle was the first Romanian historical work to record a tale about Vlad the Impaler, narrating the impalement of the old boyars of Târgoviște for the murder of his brother, Dan.[174] The chronicle added that Vlad forced the young boyars and their wives and children to build the Poenari Castle.[174] The legend of the Poenari Castle was mentioned in 1747 by Neofit I, Metropolitan of Ungro–Wallachia, who complemented it with the story of Meșterul Manole, who allegedly walled in his bride to prevent the crumbling of the walls of the castle during the building project.[174][175] In the early 20th century, Constantin Rădulescu-Codin, a teacher in Muscel County where the castle was situated,[175] published a local legend about Vlad's letter of grant "written on rabbit skin" for the villagers who had helped him to escape from Poenari Castle to Transylvania during the Ottoman invasion of Wallachia.[176] In other villages of the region, the donation is attributed to the legendary Radu Negru.[177]
Rădulescu-Codin recorded further local legends,[178] some of which are also known from the German and Slavic stories about Vlad, suggesting that the latter stories preserved oral tradition.[179] For instance, the tales about the burning of the lazy, the poor, and the lame at Vlad's order and the execution of the woman who had made her husband too short a shirt can also be found among the German and Slavic anecdotes.[180] The peasants telling the tales knew that Vlad's sobriquet was connected to the frequent impalements during his reign, but they said only such cruel acts could secure public order in Wallachia.[181]
Most Romanian artists have regarded Vlad as a just ruler and a realistic tyrant who punished criminals and executed unpatriotic boyars to strengthen the central government.[182] Ion Budai-Deleanu wrote the first Romanian epic poem focusing on him.[182] Deleanu's Țiganiada (Gypsy Epic) (which was published only in 1875, almost a century after its composition) presented Vlad as a hero fighting against the boyars, Ottomans, strigoi (or vampires), and other evil spirits at the head of an army of gypsies and angels.[183] The poet Dimitrie Bolintineanu emphasized Vlad's triumphs in his Battles of the Romanians in the middle of the 19th century.[184] He regarded Vlad as a reformer whose acts of violence were necessary to prevent the despotism of the boyars.[185] One of the greatest Romanian poets, Mihai Eminescu, dedicated a historic ballad, The Third Letter, to the valiant princes of Wallachia, including Vlad.[186] He urges Vlad to return from the grave and to annihilate the enemies of the Romanian nation:[186]
You must come, O dread Impaler, confound them to your care. Split them in two partitions, here the fools, the rascals there; Shove them into two enclosures from the broad daylight enisle 'em, Then set fire to the prison and the lunatic asylum.
In the early 1860s, the painter Theodor Aman depicted the meeting of Vlad and the Ottoman envoys, showing the envoys' fear of the Wallachian ruler.[187]
Since the middle of the 19th century, Romanian historians have treated Vlad as one of the greatest Romanian rulers, emphasizing his fight for the independence of the Romanian lands.[184][188] Even Vlad's acts of cruelty were often represented as rational acts serving national interest.[189] Alexandru Dimitrie Xenopol was one of the first historians to emphasize that Vlad could only stop the internal fights of the boyar parties through his acts of terror.[185] Constantin C. Giurescu remarked, "The tortures and executions which [Vlad] ordered were not out of caprice, but always had a reason, and very often a reason of state".[189] Ioan Bogdan was one of the few Romanian historians who did not accept this heroic image.[190] In his work published in 1896, Vlad Țepeș and the German and Russian Narratives, he concluded that the Romanians should be ashamed of Vlad, instead of presenting him as "a model of courage and patriotism".[185] According to an opinion poll conducted in 1999, 4.1% of the participants chose Vlad the Impaler as one of "the most important historical personalities who have influenced the destiny of the Romanians for the better".[191]
Vampire mythology
The stories about Vlad made him the best-known medieval ruler of the Romanian lands in Europe.[192] However, Bram Stoker's Dracula, which was published in 1897, was the first book to make a connection between Dracula and vampirism.[193] Stoker had his attention drawn to the blood-sucking vampires of Romanian folklore by Emily Gerard's article about Transylvanian superstitions (published in 1885).[194] His limited knowledge about the medieval history of Wallachia came from William Wilkinson's book entitled Account of the Principalities of Wallachia and Moldavia with Political Observations Relative to Them, published in 1820.[195][196]
Stoker "apparently did not know much about" Vlad the Impaler, "certainly not enough for us to say that Vlad was the inspiration for" Count Dracula, according to Elizabeth Miller.[197] For instance, Stoker wrote that Dracula had been of Székely origin only because he knew about both Attila the Hun's destructive campaigns and the alleged Hunnic origin of the Székelys.[198] Stoker's main source, Wilkinson, who accepted the reliability of the German stories, described Vlad as a wicked man.[199] In fact, Stoker's working papers for the book contain no references to the historical figure;[196] in all drafts but the later ones, the character was called "Count Wampyr". Consequently, Stoker borrowed the name and "scraps of miscellaneous information" about the history of Wallachia when writing his book about Count Dracula.[196]
Appearance and representations
Pope Pius II's legate, Niccolò Modrussa, gave the only extant description of Vlad, whom he had met in Buda.[200] A copy of Vlad's portrait has been featured in the "monster portrait gallery" in the Ambras Castle at Innsbruck.[201] The picture depicts "a strong, cruel, and somehow tortured man" with "large, deep-set, dark green, and penetrating eyes", according to Florescu.[201] The colour of Vlad's hair cannot be determined because Modrussa mentions that Vlad was black-haired, while the portrait seems to show that he had fair hair.[201] The picture depicts Vlad with a large lower lip.[201]
Vlad's bad reputation in the German-speaking territories can be detected in a number of Renaissance paintings.[202] He was portrayed among the witnesses of Saint Andrew's martyrdom in a 15th-century painting, displayed in the Belvedere in Vienna.[202] A figure similar to Vlad is one of the witnesses of Christ in the Calvary in a chapel of the St. Stephen's Cathedral in Vienna.[202]
[Vlad] was not very tall, but very stocky and strong, with a cold and terrible appearance, a strong and aquiline nose, swollen nostrils, a thin and reddish face in which the very long eyelashes framed large wide-open green eyes; the bushy black eyebrows made them appear threatening. His face and chin were shaven but for a moustache. The swollen temples increased the bulk of his head. A bull's neck connected [with] his head from which black curly locks hung on his wide-shouldered person.
In popular culture
A Treia țeapă (The Third Stake) (1978), a play about Vlad the Impaler by Marin Sorescu, was staged at the height of Nicolae Ceaușescu's totalitarian regime. The play focused on the cruelty and ultimate failure of the absolute power of the historical Vlad Țepeș. It was translated into English in 1987 as Vlad Dracula the Impaler.[204]
In the light novel Fate/Apocrypha (2012–2014), Vlad III appears under the title of "Lancer of Black". In this incarnation, he is a Heroic Spirit, or Servant; Vlad is summoned to fight in an event called the Great Holy Grail War, alongside (and against) other summoned heroes. He has the ability to recreate and summon the "forest of the impaled", but also the ability to transform into a vampire, owing to his name's association with Bram Stoker's Dracula, which he despises.[209]
Secondary sources
Andreescu, Ștefan (1991). "Military actions of Vlad Țepeș in South-Eastern Europe in 1476". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 135–151. ISBN 978-0-88033-220-0.
Balotă, Anton (1991). "An analysis of the Dracula tales". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 153–184. ISBN 978-0-88033-220-0.
Cain, Jimmie E. (2006). Bram Stoker and Russophobia: Evidence of the British Fear of Russia in Dracula and The Lady of the Shroud. McFarland & Company, Inc., Publishers. ISBN 978-0-7864-2407-8.
Cazacu, Matei (1991). "The reign of Dracula in 1448". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 53–61. ISBN 978-0-88033-220-0.
Cornis-Pope, Marcel; Neubauer, John (2004). History of the Literary Cultures of East-Central Europe: junctures and disjunctures in the 19th and 20th centuries.
Florescu, Radu R. (1991). "A genealogy of the family of Vlad Țepeș". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 249–252. ISBN 978-0-88033-220-0.
McNally, Raymond T. (1991). "Vlad Țepeș in Romanian folklore". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 197–228. ISBN 978-0-88033-220-0.
Nandriș, Grigore (1991). "A philological analysis of Dracula and Romanian place-names and masculine personal names in.a/ea". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 229–237. ISBN 978-0-88033-220-0.
Panaitescu, P. P. (1991). "The German stories about Vlad Țepeș". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 185–196. ISBN 978-0-88033-220-0.
Rezachevici, Constantin (1991). "Vlad Țepeș – Chronology and historical bibliography". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 253–294. ISBN 978-0-88033-220-0.
Stoicescu, Nicolae (1991). "Vlad Țepeș' relations with Transylvania and Hungary". In Treptow, Kurt W. (ed.). Dracula: Essays on the Life and Times of Vlad Țepeș. East European Monographs, Distributed by Columbia University Press. pp. 81–101. ISBN 978-0-88033-220-0.
Treptow, Kurt W. (2000). Vlad III Dracula: The Life and Times of the Historical Dracula. The Center of Romanian Studies. ISBN 978-973-98392-2-8.
Further reading
Trow, M. J. (2003). Vlad the Impaler: In Search of the Real Dracula. The History Press. ISBN 978-1-910670-08-8.
Stoker "apparently did not know much about" Vlad the Impaler, "certainly not enough for us to say that Vlad was the inspiration for" Count Dracula, according to Elizabeth Miller.[197] For instance, Stoker wrote that Dracula had been of Székely origin only because he knew about both Attila the Hun's destructive campaigns and the alleged Hunnic origin of the Székelys.[198] Stoker's main source, Wilkinson, who accepted the reliability of the German stories, described Vlad as a wicked man.[199] Actually, Stoker's working papers for his book contain no references to the historical figure,[196] the name of the character being named in all drafts but the later ones 'Count Wampyr'. Consequently, Stoker borrowed the name and "scraps of miscellaneous information" about the history of Wallachia when writing his book about Count Dracula.[196]
Appearance and representations
Pope Pius II's legate, Niccolò Modrussa, painted the only extant description of Vlad, whom he had met in Buda.[200] A copy of Vlad's portrait has been featured in the "monster portrait gallery" in the Ambras Castle at Innsbruck.[201] The picture depicts "a strong, cruel, and somehow tortured man" with "large, deep-set, dark green, and penetrating eyes", according to Florescu.[201] The colour of Vlad's hair cannot be determined because Modrussa mentions that Vlad was black-haired, while the portrait seems to show that he had fair hair.[201] The picture depicts Vlad with a large lower lip.[201]
Vlad's bad reputation in the German-speaking territories can be detected in a number of Renaissance paintings.[202] He was portrayed among the witnesses of Saint Andrew's martyrdom in a 15th-century painting, displayed in the Belvedere in Vienna. [ | yes |
Vlad the Impaler: The real Dracula (Live Science, https://www.livescience.com/40843-real-dracula-vlad-the-impaler.html)

Legends of vampires go back centuries, but few names have cast more terror into the human heart than Dracula. The fictional character, created by author Bram Stoker, was in fact based on a real historical figure called Vlad the Impaler.
Vlad the Impaler, also known as Vlad III, Prince of Wallachia, was a 15th-century warlord in what is today Romania, in south-eastern Europe. Stoker used elements of Vlad's real story for the title character of his 1897 novel "Dracula." The book has since inspired countless horror movies, television shows and other bloodcurdling tales. However, according to historians and literary scholars, such as Elizabeth Miller, who has studied the link between Stoker's character and Vlad III, the two Draculas don't really have much in common.
Who was the real Dracula?
Vlad the Impaler is believed to have been born in 1431 in what is now Transylvania, the central region of modern-day Romania. However, the link between Vlad the Impaler and Transylvania is a matter of some debate, according to Florin Curta, a professor of medieval history and archaeology at the University of Florida.
"Dracula is linked to Transylvania, but the real, historic Dracula — Vlad III — never owned anything in Transylvania," Curta told Live Science. Bran Castle, a modern-day tourist attraction in Transylvania that is often referred to as Dracula's castle, was never the residence of the Wallachian prince, he added.
This painting, "Vlad the Impaler and the Turkish Envoys," by Theodor Aman (1831-1891), allegedly depicts a scene in which Vlad III nails the turbans of these Ottoman diplomats to their heads. (Image credit: Public domain)
"Because the castle is in the mountains in this foggy area and it looks spooky, it's what one would expect of Dracula's castle," Curta said. "But he [Vlad III] never lived there. He never even stepped foot there."
Vlad III's father, Vlad II, did own a residence in Sighişoara, Transylvania, but it is not certain that Vlad III was born there, according to Curta. It's also possible, he said, that Vlad the Impaler was born in Târgovişte, which was at that time the royal seat of the principality of Wallachia, where his father was a "voivode," or ruler. There is also Castelul Corvinilor, also known as Castle Corvin, where Vlad may have been imprisoned by Hungarian Governor John Hunyadi.
It is possible for tourists to visit one castle where Vlad III certainly spent time. At about age 12, Vlad III and his brother were imprisoned in Turkey. In 2014, archaeologists found the likely location of the dungeon, according to Smithsonian Magazine. Tokat Castle is located in northern Turkey. It is an eerie place with secret tunnels and dungeons that is currently under restoration and open to the public.
Where does the name Dracula come from?
In 1431, King Sigismund of Hungary, who would later become the Holy Roman Emperor, according to the British Museum, inducted the elder Vlad into a knightly order, the Order of the Dragon. This designation earned Vlad II a new surname: Dracul. The name came from the old Romanian word for dragon, "drac."
His son, Vlad III, would later be known as the "son of Dracul" or, in old Romanian, Drăculea, hence Dracula, according to historian Constantin Rezachevici ("From the Order of the Dragon to Dracula," Journal of Dracula Studies, Vol. 1, 1999). In modern Romanian, the word "drac" refers to the Devil, Curta said.
According to "Dracula: Sense and Nonsense" (Desert Island Books, 2020) by Elizabeth Miller, in 1890 Stoker read a book about Wallachia. Although it did not mention Vlad III, Stoker was struck by the word "Dracula." He wrote in his notes, "in Wallachian language means DEVIL." It is therefore likely that Stoker chose to name his character Dracula for the word's devilish associations.
The theory that Vlad III and Dracula were the same person was developed and popularized by historians Radu Florescu and Raymond T. McNally in their book "In Search of Dracula" (The New York Graphic Society, 1972). Though far from accepted by all historians, the thesis took hold of the public imagination, according to The New York Times.
According to Constantin Rezachevici, the Order of the Dragon was devoted to a singular task: the defeat of the Turkish, or Ottoman Empire. Situated between Christian Europe and the Muslim lands of the Ottoman Empire, Vlad II's (and later Vlad III's) home principality of Wallachia was frequently the scene of bloody battles as Ottoman forces pushed westward into Europe, and Christian forces repulsed the invaders.
Years of captivity
When Vlad II was called to a diplomatic meeting in 1442 with Ottoman Sultan Murad II, he brought his young sons Vlad III and Radu along. But the meeting was actually a trap: All three were arrested and held hostage. The elder Vlad was released under the condition that he leave his sons behind. James S. Kessler ("Echoes of Empire," Lulu Publishing, 2016) argues that Vlad II "sent Vlad Junior and his brother Radu cel Frumos as 'royal hostages' to the Ottoman court."
"The sultan held Vlad and his brother as hostages to ensure that their father, Vlad II, behaved himself in the ongoing war between Turkey and Hungary," said Miller, a research historian and professor emeritus at Memorial University of Newfoundland in Canada.
Under the Ottomans, Vlad and his younger brother were tutored in science, philosophy and the arts. According to Radu Florescu and Raymond McNally, Vlad also became a skilled horseman and warrior.
"They were treated reasonably well by the current standards of the time," Miller said. "Still, [captivity] irked Vlad, whereas his brother sort of acquiesced and went over to the Turkish side. But Vlad held enmity, and I think it was one of his motivating factors for fighting the Turks: to get even with them for having held him captive."
Vlad the Prince
A bust of Vlad III that sits in the centre of Sighisoara, Romania, one of the many locations that claims to be the birthplace of the prince of Wallachia. (Image credit: David Greedy / Stringer via Getty)
While Vlad and Radu were in Ottoman hands, Vlad's father was fighting to keep his place as voivode of Wallachia, a fight he would eventually lose. In 1447, Vlad II was ousted as ruler of Wallachia by local noblemen (boyars) and was killed in the swamps near Bălteni, halfway between Târgovişte and Bucharest in present-day Romania, according to John Akeroyd ("The Historical Dracula," History Ireland, Vol. 17, No. 2, 2009). Vlad's older half-brother, Mircea, was killed alongside his father.
Not long after these harrowing events, in 1448, Vlad embarked on a campaign to regain his father's seat from the new ruler, Vladislav II. His first attempt at the throne relied on the military support of the Ottoman governors of the cities along the Danube River in northern Bulgaria, according to Curta. Vlad also took advantage of the fact that Vladislav was absent at the time, having gone to the Balkans to fight the Ottomans for the governor of Hungary at the time, John Hunyadi.
Vlad won back his father's seat, but his time as ruler of Wallachia was short-lived. He was deposed after only two months, when Vladislav II returned and took back the throne of Wallachia with the assistance of Hunyadi, according to Curta.
Little is known about Vlad III's whereabouts between 1448 and 1456. But it is known that he switched sides in the Ottoman-Hungarian conflict, giving up his ties with the Ottoman governors of the Danube cities and obtaining military support from King Ladislaus V of Hungary, who happened to dislike Vlad's rival — Vladislav II of Wallachia — according to Curta. Meanwhile, Vladislav II sought aid from Ottoman ruler Mehmed II.
Vlad III's political and military tack truly came to the forefront amid the fall of Constantinople in 1453. After the fall, the Ottomans were in a position to invade all of Europe. In July 1456, as the Ottomans and Hunyadi's forces were locked in battle, Vlad led a small force of exiled boyars, Hungarians and Romanian mercenaries against his old enemy Vladislav II at Târgoviște, according to McNally and Florescu in "Dracula, Prince of Many Faces" (Little, Brown and Company, 1990). "He had the satisfaction of killing his mortal enemy and his father's assassin in hand-to-hand combat," they wrote.
Vlad, who had already solidified his anti-Ottoman position, was proclaimed voivode of Wallachia in 1456, according to Elizabeth Miller ("A Dracula Handbook," Xlibris, 2005). One of his first orders of business in his new role was to stop paying an annual tribute to the Ottoman sultan — a measure that had formerly ensured peace between Wallachia and the Ottomans.
Why is Vlad called "The Impaler"?
A woodcut from a 1499 pamphlet depicts Vlad III dining among the impaled corpses of his victims. (Image credit: Public Domain)
To consolidate his power as voivode, Vlad needed to quell the incessant conflicts that had historically taken place between Wallachia's boyars. According to Constantin Rezachevici ("Dracula: Essays on the Life and Times of Vlad the Impaler," Center for Romanian Studies, 2019), "during a banquet given by him at the palace in Târgoviște, Vlad the Impaler ordered the impaling of some 500 Boyars (perhaps only really 50) with the accusation that their 'shameless disunity' was the cause of the frequent changing of the princes in Wallachia".
This is just one of many gruesome events that earned Vlad his posthumous nickname, Vlad the Impaler. This story, and others like it, is documented in printed material from around the time of Vlad III's rule, according to Miller.
"In the 1460s and 1470s, just after the invention of the printing press, a lot of these stories about Vlad were circulating orally, and then they were put together by different individuals in pamphlets and printed," Miller said.
Whether or not these stories are wholly true or significantly embellished is debatable, Miller added. After all, many of those printing the pamphlets were hostile to Vlad III. But some of the pamphlets from this time tell almost the exact same gruesome stories about Vlad, leading Miller to believe that the tales are at least partially historically accurate. Some of these legends were also collected and published in a book, "The Tale of Dracula," in 1490, by a monk who presented Vlad III as a fierce, but just ruler.
Vlad is credited with impaling dozens of Saxon merchants in Kronstadt (present-day Braşov, Romania), who were once allied with the boyars, in 1456, according to Kristen Wright ("Disgust and Desire: The Paradox of the Monster," Brill Rodopi, 2018). Around the same time, a group of Ottoman envoys allegedly had an audience with Vlad but declined to remove their turbans, citing a religious custom.
Commending them on their religious devotion, Vlad ensured that their turbans would forever remain on their heads by reportedly having the head coverings nailed to their skulls, according to McNally and Florescu.
"After Mehmet II — the one who conquered Constantinople — invaded Wallachia in 1462, he actually was able to go all the way to Wallachia's capital city of Târgoviște but found it deserted. And in front of the capital he found the bodies of the Ottoman prisoners of war that Vlad had taken — all impaled," Curta said.
The Battle With Torches by Romanian artist Theodor Aman depicts the nighttime raid of Vlad III against Mehmed II as he sought to end the Ottoman invasion of Wallachia. (Image credit: Public Domain/Muzeul Theodor Aman)
In one battle on June 17th, 1462, known as the Night Attack at Târgoviște, Vlad III and Mehmed II's forces fought from three hours after sunset until about four in the morning, at the foothills of the Carpathian Mountains, according to McNally and Florescu. The attack was an attempt to assassinate Mehmed II, but using only torches and flares, the Wallachian forces were unable to locate his tent and the alarm was raised. McNally and Florescu estimate 5,000 of Vlad's men were lost to 15,000 Ottomans, but point out that it was "an act of extraordinary temerity, which is celebrated in Romanian literature and popular folklore."
Vlad's victories over the invading Ottomans were celebrated throughout Wallachia, Transylvania and the rest of Europe — even Pope Pius II was impressed.
"The reason he's a positive character in Romania is because he is reputed to have been a just, though a very harsh, ruler," Curta said.
How did Vlad the Impaler die?
Not long after the impalement of Ottoman prisoners of war, in August 1462, Vlad was forced into exile in Hungary, unable to defeat his much more powerful adversary, Mehmed II. Vlad was imprisoned for a number of years during his exile, though during that same time he married and had two children.
Vlad's younger brother, Radu, who had sided with the Ottomans during the ongoing military campaigns, took over governance of Wallachia after his brother's imprisonment. But after Radu's death in 1475, local boyars, as well as the rulers of several nearby principalities, favored Vlad's return to power, according to John M. Shea ("Vlad the Impaler: Bloodthirsty Medieval Prince," Gareth Stevens Publishing, 2015).
In 1476, with the support of the voivode of Moldavia, Stephen III the Great (1457-1504), Vlad made one last effort to reclaim his seat as ruler of Wallachia. He successfully stole back the throne, but his triumph was short-lived. Later that year, while marching to yet another battle with the Ottomans, Vlad and a small vanguard of soldiers were ambushed, and Vlad was killed.
The church of Santa Maria La Nova in Naples is one of a number of locations where the remains of Vlad III are claimed to have been buried. (Image credit: Marco Cantile / Contributor via Getty Images)
There is much controversy over the location of Vlad III's tomb, according to Constantin Rezachevici in a study published in 2002 in the Journal of Dracula Studies. It is said he was buried in the monastery church in Snagov, on the northern edge of the modern city of Bucharest, in accordance with the traditions of his time. But recently, historians have questioned whether Vlad might actually be buried at the Monastery of Comana, between Bucharest and the Danube, which is close to the presumed location of the battle in which Vlad was killed, according to Curta.
One thing is for certain, however: unlike Stoker's Count Dracula, Vlad III most definitely did die. Only the harrowing tales of his years as ruler of Wallachia remain to haunt the modern world.
Vlad the Impaler's thirst for blood was an inspiration for Count Dracula (National Geographic, https://www.nationalgeographic.co.uk/history-and-civilisation/2021/11/vlad-the-impalers-thirst-for-blood-was-an-inspiration-for-count-dracula)
The ruthless brutality of Vlad III of Walachia, forged by the 15th-century clash between the Kingdom of Hungary and the Ottoman Empire, would partly inspire Bram Stoker's classic vampire novel centuries later.
This well-known portrait of Vlad III, wearing a princely cap adorned with pearls and precious stones, is a copy of one painted during his lifetime (1431-1476) now displayed in Ambras Castle in Innsbruck, Austria.
Dracula, prince of darkness, lord of the undead! This mythical character leaped onto the page from the fevered imagination of Irish writer Bram Stoker in 1897. But the historical figure who shares a name with the literary icon was no less fearsome. Vlad III Draculea was the voivode (a prince-like military leader) of Walachia, a principality that joined with Moldavia in 1859 to form Romania, on and off between 1448 and 1476. Also known as Vlad III, Vlad Dracula (son of the Dragon), and, most famously, Vlad the Impaler (Vlad Tepes in Romanian), he was a brutal, sadistic leader famous for torturing his foes. By some estimates he is responsible for the deaths of more than 80,000 people in his lifetime, a large percentage of them by impalement.
Vlad III was likely born in Sighisoara, a small medieval city founded by Saxon settlers in the Transylvania region of present-day Romania, pictured here. The city center is a UNESCO World Heritage site.
Photograph by Doug Pearson/Getty Images
Vlad III's cruelty was real, but his reputation as a villain spread through 15th-century Europe thanks to the printing press, whose rise coincided with his reign. Propagandist pamphlets written by his enemies became best sellers. Centuries later, the sinister reputation of Vlad the Impaler took on new life when Stoker came across the name Dracula in an old history book, learned that it could also mean "devil" in Walachia, and gave the name to his fictional vampire. Yet today Vlad III is something of a national hero in Romania, where he is remembered for defending his people from foreign invasion, whether Turkish soldiers or German merchants.
Family history
Vlad III, the second of four brothers, was likely born in 1431 in Transylvania, a craggy, verdant part of present-day Romania (it officially became part of that country in 1947). His mother was Princess Cneajna of Moldavia. His father, Vlad II, was an illegitimate son of a Walachian noble who spent his youth at the court of Sigismund of Luxembourg, king of Hungary and future Holy Roman emperor.
A 1440 saddle is pictured bearing the symbol of the Order of the Dragon. It's housed at the Royal Armouries Museum in Leeds, England.
The same year that Vlad III was born, his father was admitted to the Order of the Dragon. Like other chivalric orders, this Christian military society, founded in 1408 by Sigismund, was modelled broadly on the medieval crusaders; its members were 24 high-ranking knights pledged to fight heresy and stop Ottoman expansion. Upon joining the order, Vlad II was granted the surname Dracul (Dragon). His son Vlad III was known as Vlad Draculea, or Dracula, "son of the Dragon." In 1436 Sigismund made Vlad II voivode of Walachia, but Vlad II did not stay loyal. He soon switched sides and allied himself with Ottoman leader Sultan Murad II. To guarantee loyalty, Murad required Vlad II to hand over two of his sons, Vlad III and Radu the Fair.
Also known as Matthias Corvinus, Matthias I (pictured in a ca 1485 marble relief housed at the Hungarian National Gallery) held Vlad III captive for 12 years on the false grounds that he had collaborated with the Ottoman Turks to attack Hungary.
In 1447 Vlad II was ousted as ruler of Walachia by local boyars, or aristocrats, and subsequently captured and killed. That same year, Vlad III's older brother, Mircea II, was blinded and buried alive. Janos Hunyadi, regent of Hungary, who had instigated Vlad II's assassination, appointed Vladislav II, another Walachian nobleman, to be the new voivode. Historians cannot say for sure if these events motivated Vlad III's thirst for revenge, but one thing is clear: Soon after he was released from Ottoman captivity, around 1447, Vlad III began his fight for power.
In 1448, with Ottoman help, Vlad III, then 16 years old, expelled Vladislav II from Walachia and ascended the throne. He lasted only two months as voivode before the Hungarians reinstated Vladislav. Vlad III went into exile; little is known about his next eight years, as he moved around the Ottoman Empire and Moldavia.
Sometime during this period he seems to have switched sides in the Ottoman-Hungarian conflict, gaining the military support of Hungary. Vladislav II changed allegiances, too, and joined the Turks, a move that set up a clash between the two claimants to the throne of Walachia. Vlad III met Vladislav on the outskirts of Targoviste on July 22, 1456, and beheaded him during hand-to-hand combat. Vlad III's rule had begun.
Rule of terror
Walachia had been ravaged by the ceaseless Ottoman-Hungarian conflict and the internecine strife among feuding boyars. Trade had ceased, fields lay fallow, and the land was overrun by lawlessness. Vlad III began his reign with a strict crackdown on crime, employing a zero-tolerance policy for even minor offences, such as lying. He handpicked commoners, even foreigners, for public positions, a move to cement power by creating officials who were completely dependent on him. As voivode, he could appoint, dismiss, and even execute his new officials at will.
As for the boyars, the high-ranking figures who had killed his father and older brother, Vlad III had a retributive plan. In 1459 he invited 200 of them to a great Easter banquet, together with their families. There, he had the women and the elderly stabbed to death and impaled; the men he forced into slave labour. Many of these workers would die of exhaustion while building Poenari Castle, one of Vlad III's favourite residences.
To replace the boyars, Vlad III created new elites: the viteji, a military division made up of farmers who had distinguished themselves on the battlefield, and the sluji, a kind of national guard. He also liberated Walachiaâs peasants and artisans, freeing them from the tributes that they used to pay to the Ottoman Empire.
Which is not to say that Vlad III's domestic policies were benevolent. The brutal justice meted out to his enemies was sometimes applied to his own people as well. To get rid of homeless people and beggars, whom he viewed as thieves, he invited a large number to a feast, locked the doors, and burned them all alive. He exterminated Romanies or had them forcibly enlisted into the army. He imposed heavy tax burdens on the German population and blocked their trade when they refused to pay.
Denunciations of Vlad III (as in this 1488 pamphlet) tended to stem from the voivode's detractors and political enemies: either German sources or the court of Matthias I, king of Hungary.
Many of the Germans under Vlad III's aegis were Saxons. Not to be confused with the Anglo-Saxons of England, these were German migrants who had settled in Transylvania in the 12th century after the region was conquered by Hungary. They were mostly well-to-do merchants, but to Vlad III, they were allies of his enemies.
Over the next few years, Vlad III razed entire Saxon villages and impaled thousands of people. In 1459, when the Transylvanian Saxon city of Kronstadt (today Brasov) supported a rival of Vlad III's, the voivode's response was savage. After initially placing trade restrictions on Saxon goods in Walachia, he had 30,000 people impaled, and reportedly dined among them so he could witness their suffering personally. He also had Kronstadt burned to the ground. Back in Walachia, he impaled Saxon merchants who violated his trade laws.
Although Vlad continued to identify himself with the prestigious Order of the Dragon, signing his name Wladislaus Dragwlya ("son of the Dragon"), his enemies at this time gave him the less noble sobriquet Tepes, "the Impaler."
Vlad III mounted several bloody attacks against Catholic communities, too, and had the support of many of his people who, as Orthodox Christians, felt discriminated against by Hungarians and Saxon Catholics in Transylvania. Cities including Sibiu, Tara Barsei, Amlas, and Fagara were targeted and suffered many losses before surrendering in 1460. These reprisals came to the attention of Pope Pius II, who produced a report in 1462 claiming that Vlad III had killed some 40,000 people.
Drastic measures
Vlad III’s foreign policy differed from that of his father, and from many other leaders of the time. He never stopped opposing the Turks – in this he had the support of Matthias Corvinus, aka Matthias I, son of Janos Hunyadi, and king of Hungary.
Vlad III’s tactics, both on and off the battlefield, against the Turks were extraordinarily brutal. In 1459 Mehmed II sent an embassy to Vlad III, demanding a tribute of 10,000 ducats and 300 young boys. When the diplomats declined to remove their turbans, citing religious custom, Vlad III saluted their devotion – by nailing their hats to their heads. In 1461 the Turks offered to meet Vlad for a peace parley; in reality they intended to ambush him. Vlad III responded with a foray into the Turkish dominions south of the Danube.
Vlad III’s archenemy Sultan Mehmed II had this portrait painted in 1480 by Gentile Bellini, an Italian painter of the Venetian school. It’s now at the National Gallery in London.
In the spring of 1462, Mehmed II assembled an army of 90,000 men and advanced on Walachia. After conducting a series of night raids and guerrilla warfare, Vlad III employed his trademark tactic, impaling more than 23,000 prisoners with their families and putting them on display along the enemy’s route, outside the city of Targoviste. “There were infants affixed to their mothers on the stakes,” writes the French historian Matei Cazacu, “and birds had made their nests in their entrails.” The sight was so horrifying that Mehmed II, after seeing the “forest” of the dead, turned around and marched back to Constantinople. Vlad III wrote to Matthias I explaining that he had “killed peasants, men and women, old and young . . . We killed 23,884 Turks, without counting those whom we burned in homes or the Turks whose heads were cut by our soldiers.” To prove the truth of his words, he produced sacks full of severed noses and ears. As Vlad III himself recognised, most of the victims were simple peasants – Serbian Christians and Bulgarians who had been subjugated by the Turks.
The Turks ultimately prevailed because the Walachian boyars had defected to Radu, Vlad III’s brother. Radu guaranteed the aristocracy that by siding with the Ottomans, they would regain the privileges that Vlad III had stripped from them. Radu attracted support from the Romanian population, who were tired of Vlad III’s bloodlust.
Vlad III’s power, money, and troops had ebbed away so much that Matthias I was able to take him prisoner in 1462. Vlad was imprisoned in Hungary for 12 years, while power changed hands several times in Walachia. Around 1475 Matthias I sent Vlad III to recover Walachia for Hungary. In November 1476 Vlad III scored an initial victory, but one month later suffered a brutal defeat. His rival, backed by Ottoman troops, ambushed, killed, and beheaded him. By most accounts his severed head was sent to Mehmed II in Constantinople to be put on display above the city’s gates.
Vlad III dines amid impaled victims following his assault on Brasov (then known as Kronstadt). Printed in Nuremberg in 1499, this engraving, and others like it, helped spread Vlad III’s gruesome reputation across Europe.
Despite all that, Vlad III might have been a mere footnote of the Middle Ages if it were not for a book published in 1820. Written by William Wilkinson, the British consul to Walachia, An Account of the Principalities of Wallachia and Moldavia: With Various Political Observations Relating to Them delves into the region’s history and mentions the notorious warlord Vlad the Impaler.
Bram Stoker never visited Vlad’s homeland but was known to have come across Wilkinson’s book in 1890. Afterward, he wrote the following: “Voivode (Dracula): Dracula in Wallachian language means DEVIL. Wallachians were accustomed to give it as a surname to any person who rendered himself conspicuous either by courage, cruel actions, or cunning.” While the life of Vlad the Impaler had long since ended, the enduring legend of Dracula was just beginning.
A monstrous reputation
Was Vlad Dracula the only inspiration for Bram Stoker’s best-selling vampire?
Irish writer Bram Stoker published a novel in 1897, set in Transylvania, with a mysterious vampire as its hypnotic villain. Dracula thrilled readers who began to speculate on the source of Stoker’s inspiration. Many theorised that the bloody life of Vlad the Impaler, the medieval Walachian ruler also known as Dracula, was Stoker’s sole basis for the character, but Stoker drew on many sources for his most infamous creation.
Vampires were in vogue in the late Victorian period, and Stoker would have likely been familiar with earlier Gothic works such as Goethe’s poem The Bride of Corinth (1797); “The Vampyre” (1819), a short story by John W. Polidori; and the novella Carmilla (1872), by Joseph Sheridan Le Fanu. The notable connections between Dracula and Captain Vampire (1879) – a novel written 18 years before Stoker’s book, by 19-year-old Marie Nizet, a Belgian woman related to Romanian exiles – have also been pointed out.
Bram Stoker is pictured in a 1906 colorized photogravure.
Photograph by Granger Collection, ACI
Stoker had certainly read about vampirism in the Carpathians, and in 1890 was writing a novel called The Un-Dead, about a fictional character he called Count Wampyr. While on vacation that year in Whitby, England, Stoker found a rare book in the local library titled An Account of the Principalities of Wallachia and Moldavia (1820), written by British diplomat William Wilkinson. It mentioned the voivode Dracula, explaining in a footnote that in the Walachian language, dracul means “devil,” while in Hungarian it means “dragon.”
Stoker soon transformed his Wampyr into Dracula. In the course of the novel, when the titular character sketches a historical panorama to his guest Jonathan Harker, it is surely Wilkinson’s account that lies behind it. Given all these connections, it seems highly likely that Vlad III’s life provided some material for Stoker’s novel.
More recently, however, some scholars and historians have advanced an intriguing alternative theory for the primary source for Dracula: a 19th-century cholera epidemic that killed up to 1,000 people in the town of Sligo, in western Ireland. Stoker’s mother, Charlotte Thornley, survived it as a 14-year-old girl and later described it for her son in grisly detail.
In 2018 Irish researchers, led by Marion McGarry of the Sligo Stoker Society, studied Thornley’s writing. “Bram as an adult asked his mother to write down her memories of the epidemic for him,” wrote McGarry, “and he supplemented with his own research of Sligo’s epidemic.”
The outbreak caused pandemonium. To stop people from fleeing Sligo and spreading the plague, officials dug trenches around the town and blocked off the roads. Corpses lay in the street. Doctors and nurses took cholera patients, stupefied by opium or laudanum, and prematurely placed them in mass graves.
Stoker was fascinated by his mother’s description of cholera victims who were buried alive – a link, perhaps, to Dracula’s undead state. In a rare interview about Dracula, the famously private Stoker acknowledged that his story was “inspired by the idea of someone being buried before they were fully dead.”
Stoker’s horror novel inflamed imaginations, and decades later, Bela Lugosi’s iconic 1931 portrayal of Count Dracula on the silver screen made the word Dracula synonymous with vampire. The Hungarian actor’s accent and dark good looks brought the Transylvanian count to life and inspired countless imitations.
Bran Castle is pictured in Transylvania in northwestern Romania.
Photograph by Dea, Album
Places associated with the Dracula legend are popular destinations. Some, like Poenari Castle in Romania, which was an important fortress for the voivode, are associated with Vlad III. Other locations, like Bran Castle (popularly known as Castle Dracula) in Romania have no connection with Vlad III. Some argue that Bran Castle was Stoker’s inspiration for Dracula’s home in the novel, but Stoker never visited Romania, so he could not have seen it in person.
Source: King Charles III is real life Count Dracula's descendant, he owns ... (https://www.marca.com/en/lifestyle/uk-news/2022/10/22/63545731268e3e911a8b45b2.html)

The real-life Count Dracula – or rather Vlad Tepes, the man on whom the fictional character was based – is actually related to King Charles III. Bet only a few people knew that, but it's completely true: the King is actually the great-great-grandson, 16 times removed, of Vlad 'The Impaler'. Of the many Royal Family stories that have emerged since King Charles III took the throne, this one has to be the most random of them all.
Tepes was the 15th-century ruler of Wallachia, a historical region of what is now Romania, neighbouring Transylvania. He became famous in his time for his unique method of torture: impaling his enemies on wooden spikes. Such terrible "traditions" led him to be linked to Bram Stoker's 'Dracula' character for many years. To many, he is the historical figure that served as the inspiration for the literary creation.
Charles enters Buckingham Palace for the first time as King
How is King Charles III related to Vlad Tepes?
According to the Romania Tour Store, King Charles III is related to Tepes through Queen Mary. She was the consort of George V and a direct descendant of Tepes. King Charles III has visited Transylvania before, and he's even boasted about his relation to the ruthless warlord. His predilection for the country is so great that he often travels there and even owns real estate for rent.
The King purchased and restored an 18th-century Saxon cottage in the village of Viscri, in Transylvania. He visited the region for the first time in 1998 and completely fell in love with the place. The home, formerly known as the Zalan Guesthouse, is available for overnight stays, and proceeds go toward his local charitable foundation.
Now that he's taken over the throne, it will be almost impossible for King Charles III to visit that country; his commitments to the Royal Family and the Crown won't allow him to go as often as he would like. By the way, Charles is the only one with this fascination for Transylvania, even though all his siblings, and the generations of Queen Elizabeth II and King George VI, are equally related to Vlad 'The Impaler'.
Source: Dracula in Real Life – Global Volunteers Service Programs (https://globalvolunteers.org/dracula-in-real-life/)

Dracula Wasn’t a Bad Guy, He Was Just Misunderstood
This is the fourth part of our series on Myths and Legends. Moving to Romania, here we discuss the legend of Dracula and Dracula in real life.
We have all, if not seen, at least heard of movies and novels about Dracula, the blood-sucking monster of Transylvania. This is of course one of the most famous legends set in Romania, but it isn’t a Romanian legend: the blood-sucking monster story is more an Irish creation – by novelist Bram Stoker – than anything. Is Dracula then a fictional character, meaning we can all sleep well at night? No. Dracula was real, and he was born in Transylvania, but that’s about as close as the Dracula from the movies gets to Dracula in real life.
Dracula in Real Life
Dracula was a real person, more commonly known in medieval Romania as Vlad III, Prince of Wallachia or Vlad the Impaler. I know, the “impaler” is not a nice nickname, and unfortunately Vlad III did like to impale people and was famous for it. But nobody is perfect. In fact, overall, Vlad III was known as a just ruler and is actually a figure of heroism in Romania. He was known to be a harsh ruler, and brutal with his enemies, but just to his people and incredibly brave.
Vlad III ruled in Romania in the 15th century. His father, Vlad II, was knighted into the Order of the Dragon, which gave him a new surname: Dracul, as the old Romanian word for dragon was drac. Naturally, his son, Vlad III, was called “son of Dracul” or Drăculea in old Romanian. You get it now. But to make matters worse for Vlad III, in modern Romanian drac means “the devil,” so Vlad III has been mistakenly believed to be called “son of the devil.”
Inside “Dracula’s Castle.” The real life Dracula didn’t live in this castle, but it still is a really cool castle. Romania.
In reality, the Order of the Dragon had nothing to do with the devil, but everything to do with fighting the Ottoman Empire. As you might remember, the crusades were still a thing in 15th-century Europe, so it was common for European kings to fight against the Turks. Although Vlad III had a small army compared to the Ottomans, he had good tactics and was brutal and brave, so for a long time he was able to repel the Ottoman invasion. But eventually he was overcome by the vastness of the Turkish army and killed.
The story of Dracula is a Romanian legend, not of blood-sucking monsters but of bravery and heroism. Whether you think Dracula in real life was better or worse than the Dracula from the movies, that doesn’t take away from the thrill around this legend, in either version. Moreover, this only adds to the richness of Romanian culture. There is so much Dracula touring you can do in Romania before or after your service program. What are you waiting for?
Source: The real Vlad Dracula: in search of a 15th-century warlord – The Past (https://the-past.com/feature/the-real-vlad-dracula-in-search-of-a-15th-century-warlord/)
In 1459, Pope Pius II called on Christendom to summon a new crusade against the Ottoman Turks. Mehmed II – the Sultan known as ‘the Conqueror’, who six years earlier had conquered Constantinople, capital of the Byzantine empire – had extended Ottoman rule deep into the Balkans, occupying Bosnia, Serbia, and Peloponnesian Greece, while Albania continued to resist. But despite Pius’ invitation to take up arms, every major European state made its excuses. The French King could do nothing until relations with England had improved. The German Emperor could not depend on his princes. The Polish King was fighting the Teutonic Order. The Venetians asked for too much money.
An early portrait of Vlad Dracula, known as Vlad the Impaler, the 15th-century prince of Wallachia whose cruel methods of punishment made him notorious across Europe. Image: Wikimedia Commons
Only Vlad Dracula was prepared to face the Turkish hordes. He was the voivode (military governor, or prince) of Wallachia – the historical region north of the Lower Danube and south of the Carpathian mountains, and one of four neighbouring provinces (with Transylvania, Moldavia, and Dobrudja) that make up present-day Romania. Born in 1431, Dracula was the son of Vlad Dracul, or ‘Vlad the Dragon’ – a loyal and successful knight who gained his sobriquet when he was made a member of the Order of the Dragon by his powerful patron, Sigismund of Luxembourg (the holder of several Central European crowns, including that of King of Hungary, and later also to be Holy Roman Emperor); Dracula simply means ‘son of Dracul’. As a teenager, the younger Vlad had been given as a hostage to the Turks to ensure his father’s loyalty, and during his captivity he learned much about the ways of Turkish warfare.
By the spring of 1462, Mehmed II had raised a 60,000-strong army to strike back at the defiant Wallachian prince. It included elite units known as Janissaries (from the Turkish word for ‘new soldier’) and Turkish soldiers from Asia Minor, as well as 4,000 auxiliary Wallachian horsemen, among them exiled boyars (members of the highest rank of the feudal nobility), led by Radu, Dracula’s younger brother, who had also been held hostage by the Turks and had remained loyal to them. The Sultan considered Dracula’s submissive brother an excellent candidate for the Wallachian throne. The realisation of what faced him spurred Dracula to gather an army of some 30,000 in number. His own boyars and their retainers were augmented by Wallachian and Bulgarian peasants. Men who distinguished themselves in battle were instantly promoted to officer rank. These viteji, or ‘braves’, shaped the unprofessional mass into an army.
Wallachia, as depicted in the 1493 Nuremberg Chronicle. The historical region north of the Lower Danube and south of the Carpathian mountains, it is now part of Romania. Image: Wikimedia Commons
The Turks advanced in two parts. The main force, led by the Sultan Mehmed, sailed up the Danube. A supporting land force marched from Philippopolis (now Plovdiv) in Bulgaria. They met at the port of Vidin, one of the few Danube towns not destroyed by Dracula in preparation for the campaign. As the Turks moved along the river, Dracula’s horsemen kept them shadowed. When the Ottomans prepared to disembark on the northern bank, the Wallachians burst from the forest and let fly with their bows, forcing the Turks back into their boats. A few miles further on, the Turkish army finally crossed the Danube under cover of night and numerous cannon.
‘A few of us first crossed the river and dug ourselves in trenches,’ remembered Constantin of Ostrovitza, a Serbian Janissary. ‘We then set up the cannon around us. The trenches were to protect us from their horsemen. After that, we returned to the other side to transport the rest of the Janissaries across. When all the foot-soldiers were over, we prepared to move against the army of Dracula together with all our artillery and equipment. But, as we set up the cannon, 300 Janissaries were killed by the Wallachians. The Sultan could see a battle developing across the river and was saddened that he could not join us. He feared we might all be killed. However, we defended ourselves with 120 cannon and eventually repelled the Wallachian army. Then the Sultan sent over more men called azapi and Dracula gave up trying to prevent the crossing and withdrew. After crossing the Danube himself, the Sultan gave us 30,000 ducats to divide among us.’
The triumphal entry of Sultan Mehmed II into Constantinople, the capital of the Byzantine empire, in 1453, as depicted four centuries later by the Italian painter Fausto Zonaro. Image: Alamy
With a far smaller army, Dracula realised the impossibility of confronting the Turks in open combat, and decided instead on a guerrilla war with a scorched-earth withdrawal. Crops were burned, wells poisoned, livestock and peasants absorbed within the army. The Turks were slowed down by the lack of food and the intense summer heat. It was so hot, the Turks were said to cook shish kebab on the sun-heated rings of their mail armour. At night, the Sultan insisted on surrounding his army with earthworks. The Janissary Constantin recalled the frequent raids made by Dracula’s warriors: ‘With a few horsemen, often at night, using hidden paths, Dracula would come out of the forest and destroy Turks too far from their camp.’
The psychological strain of the guerrilla warfare began to tell. ‘A terrible fear crept into our souls,’ continued the Janissary. ‘Even though Dracula’s army was small, we were constantly on guard. Every night we used to bury ourselves in our trenches. And yet we still never felt safe.’ In the Carpathian mountains, overlooking his capital at Tîrgoviște, Dracula planned his most famous night-raid. He gathered several thousand of his finest horsemen. Captured Turkish warriors were subjected to hideous torture, and precise information extracted from them. Dracula wanted to capture the Sultan.
Prince of darkness
At nightfall, Dracula’s forces assembled in the dim forest. Through ferns and brush-wood, they trod silently. Turkish guards were strangled. Suddenly, all hell broke loose. Swinging sabres, yelping like wolves, shooting bows, the Wallachian horsemen descended on the Turkish camp. Slashing through tents and warriors slumped by fires, Dracula’s forces were everywhere. They searched for the Sultan’s tent. A particularly grand structure caught their attention, and they tore down the rich material. Cutting down its defenders, two viziers were slaughtered, but they were not the Sultan.
Dracula’s surprise attack on the invading Ottoman forces, led by Mehmed II, on the night of 17 June 1462, as depicted by the 19th-century Romanian artist Theodor Aman.
While the majority of Turks panicked, Mehmed’s Janissaries picked up their arms and assembled around their master’s tent. Had Dracula’s other commander joined the attack, this loyal but small force could have been overcome – but the boyar had lost his nerve. The Janissaries raised their bows and handguns. The majority of the Wallachians were content to massacre the more vulnerable Turks and load themselves with loot, before disappearing back into the forest. Dracula was furious. The Sultan had been within his grasp. Mehmed survived the night of slaughter, but he had lost several thousand of his men in a traumatic combat. It was the nearest the two forces would come to a major battle throughout the campaign.
Shaken but undeterred, the Turks advanced on Tîrgoviște. Just outside the city, Mehmed came across a mile-long gorge. It was filled with the most terrible of sights. More than 20,000 contorted, rotting bodies, many of them Turkish, were perched on a forest of stakes – impaled on the orders of Dracula. The Sultan was revolted by the scale of the horror. Dracula had finally pierced the Sultan’s brutal mind with his terror. ‘Overcome by disgust,’ wrote the Byzantine chronicler Chalcondylas, ‘the Sultan admitted he could not win this land from a man who does such things.’
The main Turkish army was ordered to withdraw eastwards. The night attacks and the spread of disease among his soldiers were probably the main reasons for Mehmed’s reluctance to assault Tîrgoviște, but Dracula’s terrorism should not be underestimated. Throughout Christendom, the Turkish withdrawal was received as Dracula’s victory.
A German print of c.1499 records Dracula dining among the dead and dying bodies of Saxon merchants captured in a raid. No execution was too revolting for Dracula to witness with pleasure. Image: Alamy
Before leaving Wallachia, however, Mehmed gave Radu permission to seize Dracula’s crown. He left a small force of Turkish warriors under Radu’s command. By this stage, Dracula was exhausted. His guerrilla warfare had damaged his own people as much as the Turks. Many of his loyal boyars were disappointed that Dracula had not achieved an outright victory and finished the Turkish menace completely. Radu realised that most Wallachians were desperate for a return to peace, and talked with the leading nobles. They were happy to become a tribute-paying ally of the Turks in return for an end to hostilities. Radu built on Dracula’s resistance of Mehmed to gain greater independence for his country, but chose reconciliation to secure a rapid peace. The boyars proclaimed that a ‘victory can sometimes be more harmful to the victorious than the defeated’.
‘The Impaler’: how Vlad Dracula earned his gruesome nickname
Dracula made Tîrgoviște in central Wallachia his capital. Within his palace, he plotted his revenge against the Wallachian nobility – the boyars – whom he felt had betrayed his family. On a terrible Easter Sunday in 1457, the boyars were dragged out one by one and impaled on stakes outside his palace. The political reasoning was revealed in a question he asked the noblemen before execution: ‘How many princes have ruled Wallachia in your lifetime?’ None were so young that they had not known at least seven. At this Dracula grew angry. ‘It is because of your intrigues and feuds that the principality is weak.’ Dracula replaced the massacred boyars with a new nobility.
Despite his control over the nobility, Dracula never felt completely secure. Wallachian subjects were impaled for the most trivial reasons, earning him the Romanian nickname Țepeș – ‘the Impaler’.
The German settlers of Transylvania, and especially the Saxon merchants of Brașov, wielded great economic power. Whenever they refused Dracula’s one-sided treaties, however, the Wallachian brutally destroyed their communities. They never forgave Dracula and relentlessly conspired against him, spreading their illustrated printed accounts of his atrocities to Western Europe. At a time of many bloodthirsty warlords, even his contemporaries considered him excessively violent.
Dracula’s castle
By the end of the year, Radu had been recognised as prince of Wallachia by most boyars and the new King of Hungary, Matthias I. The Turks were happy. Rejected by his people and with few resources, Dracula escaped northwards to the mountains of Transylvania. There, he licked his wounds in the fortress of Argeș. Perched among the craggy Carpathian range, this was Dracula’s Castle. According to local folklore, Radu pursued his brother along the valley of the Argeș river. He set Turkish cannon on a hill opposite Dracula’s Castle and began to pound it. A final assault was prepared for the next day.
During the night, it is said, a slave who was a distant relative of Dracula crept out of the Turkish camp to warn the former prince. He attached a message to an arrow and shot it through a window of the castle. Informed of their fate, Dracula’s wife declared she would rather have her ‘body eaten by the fish than become a Turkish slave’, and threw herself from the battlements, plummeting into the river below. Dracula decided on a less self-destructive escape. Slipping out of the castle, he climbed the rocky slopes, and rode for the mountain city of Brașov, where Matthias I had made his headquarters during this crisis. But the arrival of the ragged, exhausted, desperate Wallachian was nothing but an embarrassment to the King. Having already recognised Radu as the new prince, the King had Dracula escorted to a prison in Buda, Hungary’s historic capital.
The understandable fear of visiting Ottoman envoys on being presented to Vlad Dracula is apparent in another painting by the Romanian artist Theodor Aman. Image: Wikimedia Commons
Dracula remained in prison for 12 years. One chronicler relates that even in his cell he inflicted pain on others, by catching mice and impaling them. But this is propaganda. In reality, Dracula resided at the Hungarian court under house arrest. King Matthias considered it useful to have a claimant to the Wallachian crown among his court. Besides, Dracula’s brutal talents might one day be needed in another crusade against the Turks.
Dracula remarried – this time into the Hungarian royal family. He also renounced Greek Orthodoxy to become a Roman Catholic. In Wallachia, this conversion was considered a heresy, and all such heretics were said to become vampires after death. Catholicism eased Dracula’s path to freedom, however, and he was given the rank of captain in the Hungarian army. King Matthias would have Dracula presented to visiting Turkish envoys to assure them that he could be unleashed at any time. The Turks still feared him.
Return to power
By 1475, Stephen III, Prince of Moldavia – commonly known as Stephen the Great – was keen for an alliance with Hungary. He saw little difference between the Turks and the Wallachians, and wished to secure his western and southern borders against them. He proclaimed a crusade, and invited the King of Hungary to join him. King Matthias was happy to receive funds for this campaign from the Pope, but created more noise than action. By himself, Stephen moved against Radu and deposed him – but Radu fought back with Turkish help.
To protect the Transylvanian border against Wallachian raids, Dracula was placed in command of frontier forces. Once in the saddle, he resumed his war of terror. A papal envoy reported that Dracula cut the Turks to pieces and impaled the bits on separate stakes. At the Battle of Vaslui, Stephen, possibly with Dracula in his ranks, won a great victory against a large army of Wallachians and Turks. The triumph was followed by a formal alliance between Stephen and Matthias. The following year, the Hungarian King declared Dracula his candidate for the throne of Wallachia.
In the autumn of 1476, an army of 25,000 Hungarians, Transylvanians, Wallachians, and Serbs assembled in southern Transylvania. Ultimate command lay with Stephen Bathory, a loyal retainer to King Matthias, but the object was to place Dracula on the throne of Wallachia. At the same time, a force of 15,000 Moldavians prepared to invade eastern Wallachia under Stephen III. In November, Dracula descended from the Carpathians and besieged Tîrgoviște.
Before this massive army, the Wallachian citizens could do little. The capital fell, and the army moved southwards. With the capture of Bucharest, Dracula became prince of Wallachia again. The boyars were seemingly behind him, but the apparent submission of Wallachia to the old tyrant was just that. Within a couple of months, a mutilated, headless body was discovered in marshes near to the monastery of Snagov. The corpse was that of Dracula. The boyars could not forget the horror of his reign. With a small bodyguard of Moldavians, Dracula had been surprised in a skirmish outside Bucharest. Whether it was Wallachians or Turks who delivered the final blows is unknown. Indeed, the exact circumstances of the assassination remain a mystery. What is certain is that Dracula’s head was cut off and sent to the Sultan at Constantinople. The Turks rejoiced.
The death of Vlad Dracula did little to improve the state of Wallachia. It most certainly weakened its anti-Ottoman stance. Princes came and went, and Wallachia depended increasingly on the energetic Stephen III to preserve the Danube frontier against the Turks. In the next century, the battle was lost – and the Turks surged across the river to Hungary.
Lasting influence
To the Romanians, Vlad Dracula has remained a national hero, a staunch defender of Christianity against the Turks. In Western Europe, however, his image has undergone a devilish transformation, from triumphant crusader to bloodthirsty vampire. The latter is a creation of the 19th-century imagination, consolidated in books and films (see box left), but by the late 15th century, Western writers had already forgotten Dracula’s crusading triumphs and repeated only horrific accounts of his cruelty. German woodcut pamphlets were the principal agents of this image, showing Dracula dining among impaled victims. The Saxon merchants of Transylvania and their German neighbours never forgave Dracula for his raids and crimes against their people. Through their history, they forever damned Dracula as the cruellest of medieval warlords. •
Fact versus fiction
‘Within, stood a tall old man, clean- shaven save for a long white moustache, and clad in black from head to foot, without a single speck of colour about him anywhere.’ Thus it was that Count Dracula entered the modern imagination through Bram Stoker’s famous novel, published in 1897.
In Dracula, the vampire count is portrayed as a Hungarian gentleman with a great interest in England, and this vision of him as a cultured and highly sophisticated man of the world was reinforced by a number of hugely successful movies during the 20th century. From Bela Lugosi in the 1930s to Christopher Lee in the 1950s and ’60s, a succession of actors wore the black evening wear of a 19th-century gentleman in their performances. As explained on these pages, however, the real Dracula was a 15th-century Romanian warlord with a story just as chilling as that of his distant black-cloaked descendant.
Stoker himself was aware of his character’s military heritage, and gives the fictional Dracula a passionate speech in which he boasts of his martial background. ‘What devil or what witch was ever so great as Attila, whose blood is in these veins?’ he proclaims, holding up his arms. ‘Is it a wonder that we were a conquering race; that we were proud; that when the Magyar, the Lombard, the Avar, the Bulgar, or the Turk poured his thousands on our frontiers, we drove them back?’ It is here perhaps that Stoker comes closest to the spirit, at least, of Vlad Dracula.
Tim Newark is the author of numerous books about military history, and was the editor of Military Illustrated magazine for 17 years.
Folklore | Was there ever a real life Dracula? | yes_statement | there was a "real" "life" dracula.. dracula was a "real" historical figure. | https://www.historyskills.com/classroom/year-8/vlad-the-impaler/ | Vlad the Impaler: the real-life monster who inspired Bram Stoker's ... | As the sun sets over the Carpathian Mountains, a figure emerges from the shadows, his eyes gleaming with an otherworldly hunger.
He is Dracula, the most famous vampire in literature, but few know that his character was inspired by a real-life historical figure, Vlad the Impaler.
Vlad III, also known as Vlad the Impaler, was a ruler of Wallachia, a region in present-day Romania, in the 15th century.
Although he ruled for a relatively short time, his reputation as a cruel and ruthless leader has lasted for centuries, earning him a place in history as one of the most notorious figures of the Middle Ages.
Early life
Vlad III was born in 1431 in Transylvania, the son of Vlad II, also known as Vlad Dracul, meaning "the dragon."
Vlad III was later known as Dracula, which means "son of the dragon" in Romanian.
The family belonged to the House of Drăculești, a noble family with ties to the Wallachian ruling class.
Vlad III spent much of his early life as a hostage of the Ottoman Empire, where he learned the art of warfare and developed a fierce reputation as a fighter.
When he returned to Wallachia in 1448, he found that his father had been overthrown by his political enemies, who had allied themselves with the Ottoman Empire.
Vlad III and his younger brother Radu were imprisoned by the Ottomans, but Vlad was released in 1456 after promising to pay tribute to the Ottoman Empire.
With the help of Hungarian and Wallachian forces, Vlad III seized the throne in 1456 and began his reign as the ruler of Wallachia. He ruled a total of three times: in 1448, 1456-1462, and 1476.
Greatest crimes
Vlad III was known for his brutal tactics, which included impaling his enemies on stakes, a method that would earn him the nickname "the Impaler."
His preferred method of execution was impalement, in which a person was skewered through the rectum and then hoisted up on a stake.
This method of execution was slow and agonizing, and it was intended to strike fear into the hearts of his enemies.
Vlad III was not only brutal towards his enemies but also towards his own people.
He was known for his strict laws and harsh punishments, which included impalement for even minor offenses. His rule was characterized by a reign of terror that lasted for many years, and his cruelty was legendary.
Despite his reputation for cruelty, Vlad III was also seen as a hero by many in Wallachia, who saw him as a defender of their land against the invading Ottoman Empire.
He was known for his bravery and his determination to protect his people, even if it meant using brutal tactics.
Inspiration for Dracula
Although Vlad III was a real historical figure, his reputation as a bloodthirsty vampire did not come until much later. In 1897, Bram Stoker published his novel,
Dracula, which was based on the legend of Vlad the Impaler.
Stoker was inspired by the historical figure and used elements of his life and reign in his novel. However, he also added supernatural elements to the story,
including the character of Dracula as a vampire who feeds on the blood of his victims.
The character of Dracula has since become one of the most iconic and recognizable figures in popular culture, and the novel has inspired countless movies, TV shows,
and other works of fiction.
There are several elements of the Dracula character that are based on Vlad the Impaler, although Bram Stoker also added fictional elements to the character.
One of the most obvious connections is the name "Dracula," which was derived from Vlad III's patronymic name, "Dracul." In Romanian, "Dracul" means "the dragon,"
and Vlad III was known as "Dracula," meaning "son of the dragon." Stoker likely chose the name for its associations with evil and danger.
Another connection between the two is their shared reputation for brutality. Vlad III was known for his preferred method of execution, impalement, while Dracula is
portrayed as a vampire who feeds on the blood of his victims. Both characters are associated with death and violence.
Stoker also drew inspiration from the historical context of Vlad III's reign, particularly his role as a defender of Wallachia against the invading Ottoman Empire.
In Dracula, the eponymous character is portrayed as a foreign invader who threatens the safety of England.
Despite these connections, it's important to note that Stoker also added fictional elements to the character of Dracula, such as his ability to transform into a bat
and his vulnerability to sunlight and garlic.
These elements were not based on Vlad III or on any other historical figure, but were instead created by Stoker to add to the character's mystique and to make him a
more formidable opponent.
Folklore | Was there ever a real life Dracula? | no_statement | there was never a "real" "life" dracula.. dracula is purely a fictional character. | https://www.goodreads.com/topic/show/1396111-hundred-year-old-horror-in-2023 | Horror Aficionados - Horrorpedia: Hundred Year Old Horror in 2023 ... | So upon looking up an old book of horror, it got me to thinking. As I looked at the book on my screen I noticed it's almost one hundred years old and has a solid 4.00 star rating. However, this book was Dracula, a classic. Which got me to my idea and investigation into seeing how other old classic horror novels fare with us here on Goodreads. I looked further into Bram Stoker and also looked into two other famous horror authors, Edgar Allan Poe and Washington Irving. Here is my analysis and the averages of how their books ranked.
I based my averages on where books placed between the lowest and highest numbers, so basically a median. Stoker, of course, is known mainly for Dracula, so I was very curious to see how his other works ranked. Surprisingly, they were quite all over the place, as you can tell from the gap of a full 1.00 star. Basically, Stoker's ranking to me is purely based on Dracula, and on the fact that when some try to look into his other work, it just doesn't measure up.
Poe's rankings were impressive. Mostly all his works are in the 4.00-star ratings, and clearly the ole classic horror author/poet hasn't lost a step after all these years. Yes, unlike Stoker, Poe was known for many works, so that could be a factor, but given how some people rate books on this site, it is kind of a surprise to me that all his books are pretty much in the 4-star ratings. (That's some good work, Mr. Poe.)
Irving, of course, is known for Sleepy Hollow and Rip Van Winkle. Finding another work that wasn't one of these was hard, but there are a few. I rated them based on one to two copies of each, since he didn't have many others (I didn't want to dig). Again, he ranks rather high in the 3-star range, but I would consider that a bit above average.
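For anyone curious, the "median of an author's ratings" approach described above can be sketched in a few lines of Python. The star values below are made-up placeholders for illustration, not the actual Goodreads numbers.

```python
# Sketch of the "median of an author's book ratings" approach.
# The star-rating values here are hypothetical placeholders,
# not real Goodreads data.
from statistics import median

ratings = {
    "Stoker": [3.99, 3.45, 3.12, 3.60, 2.99],
    "Poe": [4.20, 4.35, 4.10, 4.25],
    "Irving": [3.80, 3.70, 3.55],
}

for author, stars in ratings.items():
    # The median sits at the middle of the sorted ratings, so one
    # outlier hit (or flop) doesn't drag the author's number around.
    print(f"{author}: median rating {median(stars):.2f}")
```

With a median, one outlier book (good or bad) doesn't skew an author's number the way a plain mean would, which fits how the rankings above were described.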
Why did I do this? What does any of this mean? Well, for one, it tells me people still appreciate the classics and the origins of where horror came from. It shows me that after all these years, these authors who helped pave the way for us and our favorites of today are still just as popular today as they were then, although maybe even more popular today, since Stoker wasn't popular until later, after his death. So the next time you want a book to read and you're not sure of what that may be, pick up one of these gentlemen's books, keep the classics alive and let the horror thrive...
Those 3 authors were my introduction to horror literature. I can remember my mother reading "Legend of Sleepy Hollow" to me when I was too young to read for myself. The book had those lovely, creepy Rackham illustrations, icing on the cake.
I got a book of Poe stories when I was in elementary school; it was a good reading challenge to digest Poe's prose, but at that age a lot of the symbolism escaped me. I read it all anyway, and wished for more.
In junior high I started to discover some of the YA horror books, and read "Dracula" right along with them. I also discovered "Turn of the Screw" right around that time. Then, in the winter of 7th grade, a friend with more adventurous taste in books lent me a copy of "Carrie" and I was off and running into a whole genre of literature.
I wonder if my tastes in literature would have evolved differently if I had started out with "Carrie" instead of being grounded in the classics?
I did another investigation and looked into two more authors: Ambrose Bierce, who's a bit underrated in my opinion, and Mary Shelley, who, outside of Frankenstein, I was curious about, to see how her other works fared. None of Bierce's works stand out to me, but that doesn't mean he isn't good; it could just mean, as I said, that he is underrated. Shelley is widely known for one work, so I was very curious to see how many people on here read her other works and how they matched up against her greatest work.
Bierce: 3.78 star rating. Shelley: 3.46 star rating.
Bierce's rating is quite impressive. He's in the average 3 range, but it's a high 3 range, meaning his work could be considered a little above average. Given these results, I would say he is still underrated, but clearly his work is being read and appreciated.
Shelley's ranking is where I thought it would be: a bit below 3.50, which I'm going with as average. Frankenstein wasn't the highest ranked of her books, which was a surprise, but out of six books, only that and one other were over 3.50. This says that while her other work is being read, it clearly isn't as well known and never pans out to be as good as her classic. At least according to GR readers.
These are two well-known authors, however, and their fame compared to the others may be part of the reason they are ranked as they are. I thought Shelley would rank like Stoker, based off one work, but Stoker seems to have a few more known works. This tells me that there are still people who read work that's hundreds of years old, and it still proves to be well liked even today.
oh! I love Ambrose Bierce! I listen to a lot of podcasts & audiobooks. there are A LOT of his stories in audio form on iTunes for free. my 1st exposure to him was in high school. we read "an occurrence at owl creek bridge." man, did that ever affect me! (I was 16 or 17.) then we watched the twilight zone episode of it. I really enjoy his stories, though. I LOVE the dream sequence in the death of Halpin Frayser. i liked the secret of macarger's gulch a lot, too. i like most of his stories.
He's one of those names that I hear and it rings a bell. I didn't even know who he was until a year ago. Always interesting to come across an old author and read up about them and then read some of their work.
I decided to take a deeper look into Edgar Allan Poe and what made him so dark and sinister, yet so original and famous. There is no doubt that he is one of the greatest writers of all time, one of the greatest poets of all time, and one of the greatest horror authors/poets of all time. I think it goes without saying that clearly the man had some serious talent. The better question is: at the time he was alive, how many people wrote the way he did? Not even to ask if they wrote as well as him or better (not even a question), but simply, did anyone write like him in his time period? I'm guessing yes, people did write, but he was at a far higher level than anyone of his time.
Now, aside from clear talent at writing exceptional poems and stories, there also comes inspiration. Anyone can come up with a good concept for horror, but Poe was pretty damn dark if I say so myself. He had a knack for it, for describing things in such detail you almost asked yourself how someone could be so wicked. I don't believe this to be just because he was creative and skilled at thinking and writing, but because he personally had a lot of real-life demons. "Demon" not in the sense of horror, but whatever bad and dark things plagued him throughout his life. It's known that he lost his wife, so anything he wrote after that was probably quite vicious. However, reading about his life, you tend to pick out moments that may be a reason as to why his writing was so dark, but it goes without saying that in order to write and be in such a zone of horror, one must have had a horror-filled life themselves as well.
I think combining his dark personal demons with his creative knack for writing and primitive thinking made for his legendary stories and poems. You could also make a case for Hemingway, who later in life dealt with personal issues; in fact, the man took his own life, so clearly he too had demons. This, however, is about Poe in terms of horror, what he gave us and what he's left us. Has anyone matched him when it comes to storytelling at such a level? I suppose that is up for debate and depends on who you ask. I think when it comes to dark poetry, clearly he's got us all beat. While I myself write dark poetry, I know that no matter how good I may think it is, it holds a mere matchstick of fire to Poe's eternally burning candle.
Finally, what made Poe so great is that, aside from simple rhyme, he could carry on and take a reader right into the story simply by the horrific conditions and descriptions. For any writer, the goal is to write with such technique that when a reader takes it in, they are brought there. Poe, or any horror writer for that matter, takes a reader unwillingly into the dark unknown. Poe, to me, must have disturbed a lot of people during his time. He seemed like he kept to himself, aside from being married. Despite what movies or books may say, you have to figure that a man as wickedly twisted as he was must have had some serious quiet and alone time. However he was, Poe was an innovator, a man beyond his time. I'm sure if he were alive today he would still be writing away, and not only would we take to his writing, but we would be buying it in stores and eating it right up.
Anytime someone asks for horror, I am johnny-on-the-spot with such names. Same thing with this sudden lunatic craze for dystopics. I rattle off half-a-dozen forgotten dystopics anytime someone asks.
A particular horror fave of mine is William Hope Hodgson. The guy is just out there. About 1/3 into any of his books it's just a great tale of terror, whatever it is..but then the dude suddenly drops acid and goes spinning off into outer space on a raft of words. It's like the prose equivalent of Astronomy Domine or Interstellar Overdrive. A torrent. A cataract.
By the way, you can obtain a superb overview of classic horror from 100+ years ago by none other than HP Lovecraft. He wrote an exhaustive critical summary, "Supernatural Horror in Literature", and it's available as a free download on the web.
We all know about Dracula. However, to which are we referring: Vlad Dracula, or the man Bram Stoker created? Yes, Bram Stoker took the concept of Dracula from Vlad the Impaler, but how much of that is real and how much is false? He took a real man and made him into a vampire: a bloodthirsty, evil madman who sucks the blood of his victims and is captivating and alluring, among other things. How many of these things did Vlad himself actually do? None. Vlad the Impaler was a general who ruled his country of Wallachia. He impaled his victims with huge stakes, thousands upon thousands at a time, at the site of a battle or in town. This is the only actual gruesome thing that we know of about Vlad Tepes. There are rumors that he drank the blood of victims and did odd things at night, but again, these are simply rumors.
While we are no strangers to deciphering fact from fiction, sometimes it's hard to tell what is real and what is not. Dracula needed to have come from somewhere, needed a basis, and Vlad was that very basis. Dracula is also based on the sixteenth-century countess Elizabeth Bathory, who bathed in the blood of her virgin servants to remain young and youthful. She too used torture tactics to achieve her own personal bloody nirvana. It was said that she too drank blood; this is where I believe Stoker got the whole idea of drinking blood. I believe he took it from Bathory, not Vlad, since there is more evidence claiming that Bathory drank blood than Vlad. So basically, Stoker took certain elements and actual sick rituals and techniques from both people and put them together to create Dracula.
When someone asks, is Dracula real? Ehh..he was, yes, in a sense, based off the name and certain ways, but in the sense of the very vampire that we read about, not so much. I could bring the whole concept of vampirism into this, but I haven't quite done that much research, and that may turn this into a whole other discussion. Dracula's origins are that he is from Transylvania. Vlad Tepes, however, ruled Wallachia, in what is now Romania, though he was born in Transylvania. Note this thought, though: Stoker seemed to take the bloodshed and out-for-blood part from Vlad, yet used his very technique against him. Vlad was known as the Impaler because he impaled his victims on long wooden stakes. Yet it is these very wooden stakes that are the means of killing the fictional Dracula and all vampires. Also, the holy cross. We all know that crosses are said to scare off and burn Dracula and vampires if they come too close. Again, though, Vlad himself was very religious, so much so that he was denounced by the Catholic church and fought for his religious rights. So it's kind of ironic that a cross is one of the weaknesses that Dracula and vampires are prone to. Stoker clearly did his research on the Prince, but decided to do a switch and put his own twist on certain ideas.
Having read two books on Dracula, not counting the original by Stoker, I noticed interesting things in both. Vlad: The Last Confession focuses on the actual man who became the legendary fictional character. Any references to the words "Dracula" or "Dracul" are because of his name, which means Son of the Dragon or Devil. It seems only fitting that Stoker took quite the perfect name for his myth. In the book Dracula's Apprentice, Vlad Dracula is mentioned, but you don't know whether it's the real Vlad the Impaler or the fictional character. I took it as somewhere in the middle, a bit of both, which means Dracula has that realistic part to him but also the fictional part to him.
In very simple terms, Dracula was a real man. The story Bram Stoker created has made that very man be seen in a whole new light. Vlad Tepes led a double life, in a way: in real life he was a ruthless general, and in fiction he was a Prince of Darkness, a blood-drinking vampire. However a person wants to see it, without Vlad and without Bram Stoker we would not have vampires. Perhaps vampires were thought of before him; I believe they were (again, I don't want to get into another side story), but even if there were vampires, it wasn't until his tale that we really took notice of them. His simple horror book about a blood-craving, pale creature turned the world of writing and horror upside down (no pun intended). It's quite a thought, at least to me, that one man managed to change the shape of horror as we know it. It makes me wonder if another person today could create and write about their own creature that takes off, spins a whole new horrific species, and starts a phenomenon.
Ever wonder how Stephen King would have fared as an author in the 1800s? There's no doubt that he is in the top 10 with authors who had written hundreds of years ago, but given King's style and unique sense of writing horror, how exactly would his writing have compared to someone of that time? I'd imagine he could have either been as great and as known as Poe, or he could have been deemed unfit by society, and people may not have understood him. It goes without saying that any author of today may not fit the literature of years past, but some, you would think, would fit in nicely. Perhaps even do better, or at least do well for themselves.
King is a perfect example because he has dominated our generation in the world of horror and writing in general. Had he been a writer back 100 years ago? I like to think he'd still be taking people in with his work, because he is just that good. This is by no means homage to King, but merely a representation of such a solid figure in literature. I am sure there are many, and feel free to name some that you feel could not only write today but years ago. King's style may have been different, given that the content he writes would differ from the past, but I again think he'd make do with how the times presented themselves to him.
We all know about Dracula. However..to which are we referring, Vlad Dracula or the man Bram Stoker created? Yes Bram Stoker took the concept of Dracula from Vlad the I..."
There are quite a few documented accounts from the East European countries concerning vampires (with the most famous being Arnold Paole) during the 16th & 17th centuries.
One of the things I do in my books is focus on the vast variety of creatures that led to the formation of what is considered to be the traditional vampire (such as the Scottish Baobhan Sith, the Upierczi, the Strigoi, the Vishtitza and many others).
Well, of course King has the idiomatic voice of a mid-20th century guy. If he'd lived then, he would have sounded different. However, I think his strengths as a writer fit nicely with the 19th century: he writes strong characters in big, rambling narratives that are highly plot-driven and tend to have deeper, topical concerns as a subtext. In other words, he's got a lot in common with Dickens, who was wildly successful and popular in his own time.
Of course, King is extremely well-read and he was heavily influenced by Stoker, Robert Louis Stevenson, and Lovecraft, among others. Who knows whether he would have developed as well as a writer if he hadn't read such outstanding forebears as an impressionable youth?
Other 19th-century writers I really like are Mary Elizabeth Braddon (GREAT ghost stories with a keen sense of human nature) and Theophile Gautier. And lots and lots of others ... there's a whole group here on Good Reads for Classic Horror Lovers, check 'em out! Lots of good recommendations and discussions.
Not to stir up any controversy, but thinking on a rather deep thought: The Bible could be considered a book of hundred-year-old horror. I take nothing away and mean no disrespect toward its true intent or what it teaches about religion, but given some of the stories that can be taken from it, it could in some instances be considered small horror. The Bible is of course a text we refer to in a time of need, but at the same time it does make for a good tale for those looking for questionable non-fiction. Nevertheless, I believe it could lightly be considered horror, but only in small doses.
I started to look into the ever-popular H.P. Lovecraft. I looked into seven of his works to see how he rates and fares here on Goodreads, with books ranging from the 1920s to the 1930s. I noticed right away that his books were all rated above 4.14, which is above average. Taken together, the seven books averaged about 4.23, which is also above average. This puts him above Bram Stoker but below Edgar Allan Poe in the ratings previously compiled on here. It just goes to show you that people really take to H.P. Lovecraft and his work.
I myself did not know who Lovecraft was a year ago. I recently found out about who he was and what kind of things he writes about, and noticed that even after all these years he's still very much enjoyable. In fact he's become so popular that he has his own genre, so that's saying something right there. As far as Goodreads goes, I would say he's definitely one of the most read and most enjoyed authors on here. If the ratings don't say so, then people's comments in threads do. H.P. Lovecraft is a perfect and prime example of hundred-year-old horror, give or take, that is still very popular today.
I have mentioned in some other post that I listen to a lot of podcasts, which are kinda like talk radio shows but so much cooler. They have podcasts for everything imaginable! One of my favorites is the H.P. Literary Podcast. They have an option to subscribe, which I do. It's $2.22 a month & well worth it. The last 2 shows they covered: On the River by Guy de Maupassant and The Dead Valley by Ralph Adams Cram. O.M.G., these 2 stories were so so awesome!!! I have a Gutenberg app on my iPad and I found Guy's story on it. I listen to a lot of audiobooks & found a reading of The Dead Valley on Librivox. I'm pretty sure these stories fit here. If not, I'm sorry! I'm gonna post them on Paula's thread too. They were so good!
I decided to look into the poet Arthur Rimbaud and his only work, the poem 'A Season in Hell'. The poem was published in the late 1800s, and many acclaim it to be one of the best poems ever written. It tells of Rimbaud's anger toward a love that was not to be. While many suggest his language is strong with poetic tools and structure, many others would say he is merely a young man going on a tirade and simply using strong words to show his anger. Nevertheless, this poem has inspired many and has found its place in history. The uses of poetic language and his drive for writing this are obvious, but what is most bizarre and interesting is that this was the last poem he ever wrote.
After this poem he quit writing to pursue other things. Perhaps he never wanted to be a poet and only wrote his famous work to express himself, but clearly it is a wonder. I don't believe he set out to write a masterpiece; he merely wrote to express his anger and broken heart. Here on Goodreads it ranks well, and reviews on it are mixed, which is to be expected. I wish there were more works to go off of, but given this is his only one, it is in fact his testament as a writer. Whatever his agenda, A Season in Hell is a poem which draws in many readers, leaves them with questions, and leaves its mark on the mind as to what Rimbaud was thinking.
If you have ever read a book that's over a hundred years old, I'm sure you will come to one conclusion, a conclusion that you will share with me: the writing back then was spot on and seemingly more refined than it is today. Not to discredit any author of today; not in the slightest. It just seems to me that a hundred years ago writers wrote with a sense of pride, passion and purpose that we ourselves could only dream of having and achieving. I do wonder how genres were perceived back then. I would imagine they weren't as categorized and well read as they are today. You pick up a book with such age, you dust it off and dip into it, and realize it has a symbolism of its own, and that it's no wonder it is a classic of its kind.
Back then, I am almost sure, authors weren't busting their behinds to get a book out. They took their time and didn't worry about rejection, because they were their own critics, at least at first. You could also probably tell a first edition book back in the day took time; its purpose was simply to educate the mind and expand the imagination. Not saying a book of today doesn't, but with a book of today you know the author tried, whereas back then they didn't seem to strain, and even if they did try, there was not half as much a crowd of critics and places to talk of the work.
The works of 100 years ago are books we as writers strive to equal. We look to them as inspiration. Even if we never achieve such success, at least we know it is possible. For if a book as old as that can still be relevant and popular in today's world, it is only a wonder what a book of today will become in another one hundred years.
Justin wrote: "If you have ever read a book that's over a hundred years old I'm sure you will come to one conclusion. A conclusion that you will share with me, the writing back then was spot on and seemingly more..."
Just the technology alone....think about the authors who "penned" their works.....in longhand, with a quill, a bottle of ink and a blotter. Or even typing the words out on an old fashioned manual typewriter. They certainly had time to think about what they were writing, didn't they? Re-writing an entire page by hand or laboriously correcting typed print is so discouraging that you would want to get it perfect the first time.
Good point. Technology back then, or rather the lack thereof, was obvious. Quill pens, God only knows what type of paper, and how many sheets wasted. However, I would have to assume editing was still strong, given that most writers back then were well educated and knew the fundamentals of literature and grammar.
Now it goes without saying that Goodreads ratings apply to Goodreads only, and some take them seriously while others don't. For the sake of argument, and as done before, I have compared the two classics to see how they fare on here. Both have clearly had many, many readers, and both have averages well above the overall average rating. Both are timeless horror classics, and they are the kind of books that if you asked ten people, a good more than half would probably say they have read both, if not at least one of the two. That they are both brilliant stories is not the only factor; they have also inspired so many remakes and other takes on their classic original tales.
Dracula is the essential horror novel, one that stands in a horror vault of its own. It would probably rate better than Frankenstein because of the whole vampire aspect. Dracula is someone people wish they could be; they wish they could have his powers, and the story is very catching. Frankenstein does not have that factor, as perhaps some want to be a monster but not an abomination, which is what ole Frankie happens to be. Neither takes anything away from the other; each, as stated, has its own claim to what makes it so good. Mary Shelley's tale of a doctor creating a creature in his lab is so simple yet so classic now that, if the concept had not been thought of years ago and came out today, it makes you wonder how it would fare. The same can also be said for Dracula. Of course there are many, many new takes on each story, but no matter what, they don't quite measure up to the two classics from which they were inspired, no matter how good they may be. I myself have read two stories based on Dracula and Frankenstein, and I've got to say, as good as they were, knowing why they were created, I could not help but think that as good as they are, they of course will not beat out their originals.
I once read Dracula's Apprentice by Mike Zimmerman. It was a tale about a young man who gets bitten and is unaware of whom he has been bitten by. As good as the book was, it doesn't follow Dracula but the young man and those around him. This is a clear example of someone taking the name of Dracula, inserting it, and putting their own unique spin on the tale. I gave the book 4 stars and really enjoyed it. I also once read a comic version of the movie based on the classic monster, I, Frankenstein. This was definitely unique and told of Frankenstein's monster in a different way, and it also pertained more to those outside him. I actually liked it a little bit better than the original way the monster's story was told.
Overall, and it needs not be said, both Dracula and Frankenstein will remain classics for another 100 years. Bram Stoker and Mary Shelley have given us not only masterpieces of fiction to read but a great inspiration from which we as writers hope to create our own adaptations.
As we approach another year, I'd like to take a moment to ask: how has work from a hundred years ago, if any, influenced you? Are there works from way back in the day that have stuck with you, that you either really enjoyed or that made quite an impact on you? Some may think that 100-year-old horror is no different than horror of today, and while that may partly be true, it doesn't change the fact that without the horror from all those years ago we wouldn't have classics such as Dracula, and some of the books we love today might not exist without those that came before.
So take a step back, think, and ask yourself: have the works of horror from all those many years ago influenced you at all? Do you read them, and are they some of your favorites and most cherished books? It's just a thought, and it's always nice to know we appreciate the origins of what we have come to love so much.
We have to realise that a hundred years, when weighing up horror literature, is not that long ago. Algernon Blackwood, Arthur Machen, William Hope Hodgson, Sir Arthur Quiller-Couch et al. are still firm favourites of mine. When penning my own efforts, I still return to them to see how it's done properly.
I have gone on an old horror spree. I have loaded up a bunch of free e-books from authors like Wilkie Collins, M. R. James, William Hope Hodgson, etc. I think I am going to read Stoker's "Lair of the White Worm" first.
What I love about the horror from this era is the atmosphere and the subtlety that is lacking in modern horror. The old writers put a lot of thought into building a menacing atmosphere, creating so much unease and dread. This style of writing gives me chills and causes me to look over my shoulder as I read. I avoid a lot of modern horror because sitting around for hours with an expression of disgust on my face is going to cause some bad wrinkles.
Haha! I read very little modern horror, Holly. I too have all the authors you mention in my Kindle library - and they were either cheap as chips or free. I find that a great privilege. Modern writers just can't seem to duplicate that frisson of dread that authors from a century ago could.
There are some decent modern writers of the macabre, but they are few and far between. Most of the recommendations from Horror Aficionados are awful. Only occasionally will one grab my interest. Zombies or vampires seem to be the in thing at the moment, and I'm very reluctant to wade through mountains of the crap to find the odd good one.
I envy William Meikle being a genre writer; the fun you could have. I've always wanted to have a crack at it myself, but never plucked up the nerve. For my money, Philip Jose Farmer was the best of the bunch. William's work is okay, but it's hardly up there with the masters who influence him. Having said that, I bet he's having a great time churning out the stuff, though. I'll get back to you on Greg Gifune.
Adrian wrote: "Holly wrote: "I have gone on an old horror spree. I have loaded up a bunch of free e-books from authors like Wilkie Collins, M. R. James, William Hope Hodgson, etc. I think I am going to read Stoke..."
Well, I don't know much about the old or classic authors in the horror genre, but I know for sure that Bram Stoker's other works were not as great as Dracula. Nowhere near Dracula. Dracula in itself is a unique book. It has inspired hundreds of writers. My first horror literature book was Dracula. The modern horror novels are good, but still, a classic is a classic. I think we can say that gothic fiction was popularized by Bram Stoker's Dracula. There are many modern horror novels which are really good, but they are still nothing compared to Dracula. When it comes to Poe, his work just cannot be described in mere words. The best of the best. I recently purchased the complete Edgar Allan Poe collection. Both of them, according to me, are like creators who have shaped the future of horror literature. It is very rare, and sad, that we only have a few writers like them who possess such greatness. And as for Irving, I am anxiously waiting to read Sleepy Hollow. I always look on the Internet to find out more about the classic authors. From what I have learnt, besides Bram Stoker and Edgar Allan Poe, Shirley Jackson is also a very talented writer and a very popular one too. According to me, Edgar Allan Poe and Stoker are by far the most talented writers in horror literature we will ever find. I am young and I don't have a lot of knowledge about this stuff. I hope you don't mind me entering this discussion. I hope I didn't make a fool out of myself.
Harsh wrote: "Well, I don't know much about the old or classic authors in the horror genre , but I know for sure that bram stoker's other works were not as great as dracula. No where near dracula. Dracula in its..."
I agree that Dracula is an excellent horror classic, I loved it. However, there are so many other authors in the horror genre that are also good, if not better than Stoker was. Just because Dracula is very popular, does not mean that it is the best example of the genre, back then or now.
There are all kinds of horror these days...supernatural, psychological, splatterpunk, quiet, etc.. Every horror lover seems to have a niche that they enjoy the most.
Myself? I do have a soft spot for horror stories told in the epistolary style, (like Dracula), be they old or new, doesn't matter to me.
This does not mean that I agree that Poe and Stoker are the best that horror will ever be, because I do not. There are many, many other authors that I think are as great or even greater than these.
Shirley Jackson is definitely one of them-true literary horror with lots of psychological overtones.
Consider this....for quite some time, Richard Marsh's THE BEETLE was more popular than DRACULA.
Beyond the matter of personal taste, there is a vast, colorful history to the horror genre, filled with many excellent authors, that is worth exploring, with a style for every taste and hidden gems tucked away, waiting to be discovered.
Defining Dracula: A Century Of Vampire Evolution
Source: NPR, https://www.npr.org/2008/10/30/96282132/defining-dracula-a-century-of-vampire-evolution
Vampire expert Eric Nuzum explains how depictions of Transylvania's most famous son vary widely from the Victorian era to the Cold War.

Dracula can't see his own reflection in the mirror because he is a reflection of the culture around him. Ever since Bram Stoker penned Dracula in 1897, the vampire's image has been a work in progress.
In the 43 sequels, remakes and adaptations of Stoker's novel, Transylvania's most famous son rarely appears the same way twice. He has evolved with the society around him. His physical traits, powers and weaknesses have morphed to suit cultural and political climates from the Victorian era to the Cold War.
Read on to see how the "Son of the Devil" has changed over time:
1450: The Real-Life Dracula
Hulton Archive/Getty Images
The original, real-life Dracula was not a vampire, did not drink blood, and didn't worship the devil, either. But he did do many terrible things (i.e., murder thousands of his countrymen) that would make "actual" vampires pale in comparison.
Vlad III, Prince of Wallachia or "Vlad the Impaler" is Count Dracula's historical namesake. His chosen last name "Dracula" translates to "son of the devil," or "son of the dragon" — a reference to a religious order founded by his father (Vlad Dracul).
Despite his famed ruthlessness, it is most likely that his name — chosen randomly out of a Transylvanian history book — was all that Dracula author Bram Stoker ever knew of him.
1897: Modeled After Walt Whitman?
Edward Gooch/Getty Images
Today, Dracula often conjures up images of a sexy, mysterious, debonair aristocrat, but Bram Stoker's 1897 Count Dracula was none of those things. There are many theories about how Stoker crafted Dracula's look; some have speculated that the Irish author modeled him after his personal hero, Walt Whitman. (Stoker once confided in a fan letter that Whitman could be "father, and brother and wife to his soul.")
Stoker writes that Dracula had a thick mustache, a large nose and white hair that "grew scantily round the temples but profusely elsewhere." (See how those rumors about Whitman — pictured above — got started?) He describes the Count's general look as "one of extraordinary pallor." Dracula had sharp teeth, pointy ears, squat fingers and hair in the palms of his hands. The sexy, debonair vampire was a creation of later generations.
A lot was going on when Stoker was working on Dracula at the turn of the 20th century: Victorian ideals of repressed sexuality and subservient women's roles were going out of style; Darwinism was just taking hold; and Jack the Ripper was on a murder spree.
Stoker's villain channeled all that — and a lot more — into one super bad guy who resonated with readers for decades. Dracula gradually became the most significant work of Gothic horror literature because it was the perfect vessel for the fears and desires of the era.
1931: Dracula As European Aristocrat
Hulton Archive/Getty Images
As an evil intruder who disrupted innocent lives, Dracula personified all that was threatening, powerful, alluring and evil. In the 1920s and '30s, this translated into an Eastern aristocrat with slicked-back hair, a top coat and a medallion — a look that became the enduring standard for all vampires to come.
Hungarian actor Bela Lugosi became the quintessential Count Dracula in Tod Browning's film adaptation of Stoker's novel. Lugosi refused to wear any makeup that would obscure his face (he declined to play the original Frankenstein for the same reason), and so Lugosi's version of the Count never had fangs.
Lugosi made less than $3,000 for his work in the role, but nearly 80 years later, he is still considered the definitive Dracula.
1958: Dracula As Cold War Enemy
Silver Screen Collection/Hulton Archive/Getty
During the Cold War era, Count Dracula became superbad. His motives were unimportant — he was distilled into a vicious troublemaker with an appetite for destruction. Just as the U.S. viewed Cold War enemies as purely evil, Dracula became a character with whom it was impossible to empathize.
Christopher Lee's 1958 depiction of the Count had red eyes and huge fangs, often with some virginal gore hanging off them. Lee was a pro; he played the Count a total of six times — more than any other actor.
Lee's Count was so inherently menacing, that in one 1966 sequel, Dracula: Prince of Darkness, he had no lines at all — he just hissed at the camera throughout the film.
1979: Disco Dracula
Universal Pictures
In the 1979 remake of the original Dracula, the vampire was updated for the disco era with chiseled good looks and severely blow-dried hair. Forget politics or world views with this Count. He represented a sexual creature free of moral anchors — able to do whatever (or whomever) he pleased.
It's probably no coincidence that this manifestation of the Transylvanian bad boy debuted less than two years after Saturday Night Fever. Frank Langella looks as if he plans to do "The Hustle" with Tony Manero right after he drains the blood from a few virgins.
2004: Dracula Goes Goth
Universal Pictures
Goth, gaunt and hip, today's vampires look like roadies for the Smashing Pumpkins. They exude absolute freedom and irreverent power — and they're handsome to boot.
Aussie Richard Roxburgh played the Count in Van Helsing in 2004. Despite his Johnny Depp good looks, he transforms into a bat-like orthodontic nightmare when provoked.
In HBO's True Blood and author Stephenie Meyer's Twilight series, modern vampires disguise ugly evil below sexy allure. Today's Dracula reflects 21st century fears about people who are not what they seem.
Was Dracula real? How true stories and real-life figures inspired ...
Source: inews.co.uk, https://inews.co.uk/culture/television/dracula-real-true-story-vampire-name-bram-stoker-book-bbc-series-380247

For years, the tale of the Count has terrified many, and while the character of Dracula is pure fiction, it appears that the inspiration for him is not…
Claes Bang stars as Dracula in the latest adaptation written by Steven Moffat and Mark Gatiss (Photo: BBC)
Is Dracula real?
No he is not. However, rumour has it that when Stoker was writing his novel, he named the scary character after Vlad III, Prince of Wallachia, who was better known as Vlad the Impaler.
Born in 1431, Vlad Tepes went on to adopt the name Dracula – meaning “the son of the devil” – after his father Vlad II was inducted into a semi-military and religious society that was called the Order of the Dragon. Vlad II was given the name Vlad II Dracul by the boyars of Romania who associated the dragon with the Devil.
Following the death of his father, Vlad Tepes ruled over Wallachia and had a unique way of exacting revenge: impaling his enemies. Folklore suggests that the ruler even dipped his bread in the blood of his victims.
However, over the years, other historians have found that there is no link between Stoker's Dracula and Vlad the Impaler. Instead, Stoker is thought to have come across the name Dracula when he read a book about Wallachia in 1890. According to Dracula: Sense and Nonsense by Elizabeth Miller, Stoker wrote in his notes that Dracula "in Wallachian language means DEVIL".
Rumour has it that Bram Stoker based the character of Dracula on Vlad the Impaler
But it wasn’t all about Vlad, as Stoker is believed to have taken inspiration for his eerie novel from a source closer to home.
According to BBC News, Stoker took an interest in Scottish writer Emily Gerard’s book on Transylvanian folklore and was introduced to the idea of the Nosferatu, who drinks the blood of innocent people, when he read her literary offering, The Land Beyond the Forest.
The London Library recently discovered that The Land Beyond the Forest was one of Stoker’s main sources when he wrote his novel.
When is Dracula on TV?
The first episode of Dracula aired on New Year’s Day on BBC One at 9pm. The episodes will continue to air over two consecutive nights at the same time and on the same channel.
As well as being available to watch on the BBC iPlayer, viewers who are not in the UK will be able to watch the series via Netflix.
Who stars in Dracula?
Danish actor Claes Bang stars in the lead role of Dracula as The Crown’s John Heffernan takes on the role of Jonathan Harker.
Death Comes to Pemberley actress makes an appearance as Mother Superior alongside Dolly Wells in the role of Sister Agatha.
Morfydd Clark, who starred in His Dark Materials, will be seen taking on the role of Mina, and Mark Gatiss, who also co-created and co-wrote the series, will star as Frank.
Adam and Eve really existed, and why that matters
Source: focusonthefamily.ca, https://www.focusonthefamily.ca/content/adam-and-eve-really-existed-and-why-that-matters
Adam and Eve have a long history of being scoffed at by skeptics. Almost from the beginning, opponents of Christianity have dismissed the opening chapters of Genesis – including the story of our first parents – as pure myth, on par with other creation myths from the Ancient Near East.
Over the past century or so, with the advent of Darwinian naturalism, these assertions have grown more insistent, buttressed with bold claims that “science has proven” Adam and Eve could not have existed.
In recent times, even believers in growing numbers have come to question the historicity of the first human couple. They’ll insist Adam and Eve weren’t real people, just metaphorical stand-ins for humanity. At most, they’ll allow that perhaps God may have plucked a pair of hominids from the evolutionary stream, named them Adam and Eve, and infused them with souls and with his image.
These efforts can stem from an earnest desire to resolve an alleged conflict between science and Scripture. Or else, they may be an attempt to avoid looking ignorant in the eyes of secular culture. Whatever their motive, they wind up undermining the actual pursuit of science, to say nothing of the Gospel narrative of Scripture.
The genre of Genesis
When approaching a text, especially one as significant as the creation account, it’s vital to get the genre right. Genesis is not a modern textbook of history or science. It was written in elevated, stylized language, the first chapter in particular built around an artful pattern of repetition. However, that first chapter isn’t Hebrew poetry per se, any more than the rest of the book is. There’s none of the two-line parallelism that’s a defining feature of Hebrew verse found in Psalms and elsewhere in Scripture. Rather the text is in the form of historical narrative, composed to recount actual occurrences, even though its style is in keeping with the literary conventions of its time.
As apologist Alisa Childers points out, “Although the story is told in a poetic way, the Genesis account mainly exhibits the characteristics of narrative prose, which describes a series of events.”
To be sure, the proper name “Adam” is also a general term for humankind, as “Eve” is for “life” and “Eden” is for “pleasure” or “delight.” Nevertheless, the text presents Adam and Eve as actual people in a specific place and time. And they do actual people things like marrying, having children, making choices, tending a garden, giving names to animals, and conversing with each other and with God.
Moreover, Adam’s genealogical record lists his exact age when his son Seth was born, the fact that he had other sons and daughters, and the exact age when he died. In fact, the entire book of Genesis is built around a series of genealogies that connect Adam to Noah, and then to Abraham, Isaac and Jacob, and ultimately to Moses and the people of Israel. Moses, who wrote the book, treated Adam and Eve as real historical figures, no less than anyone else in that family tree.
Blurring the Imago Dei
The first chapter of Genesis states that God created humanity, male and female, in his own image. The second chapter provides more detail, describing how God formed Adam directly from the dust of the earth and breathed life into him. God then created Eve, also directly, from one of Adam’s ribs.
It’s difficult to square an honest reading of this narrative with the idea that Adam and Eve were metaphorical, or else a pair of hominids elevated to human status. The text says that when God breathed life into Adam, the man became a nephesh chaya, Hebrew for living creature. That same expression is used throughout the account to describe other living creatures, like birds and animals. So, if Adam were a divinely mutated hominid, he would have already been a nephesh chaya before God ever breathed life into him.
Beyond that, the story of Adam and Eve is essential to a proper understanding of the nature of humanity. As God’s unique image bearers, created by him for that express purpose, human beings possess a dignity and value distinct from the rest of creation. And because all people are descended from that first couple, every individual, male and female, has an equal share of that value and dignity.
If Adam and Eve were pre-existing hominids transformed by God, then humanity’s unique reflection of the Imago Dei is blurred at best and may not even be present to the same degree – or at all – in every individual. And if our first parents never existed, then any objective basis for inherent – and inherited – human worth doesn’t exist either.
Scripture after Genesis
Adam and Eve are mentioned only sporadically in the rest of Scripture after Genesis. But when they are, they’re always presented as actual historical figures. The first book of Chronicles opens with a genealogy of Israel, starting with Adam. In similar fashion, the Gospel of Luke traces the ancestry of Jesus all the way back to Adam. In the book of Acts, Paul tells the skeptical Athenians that God made all human nations from one original man. And when writing to Timothy, the Apostle again refers to Adam and Eve as historical people, as does Jude in his short letter when he quotes Enoch, a seventh-generation descendant of Adam.
Jesus himself, while teaching about marriage and divorce in the Gospels of Matthew and Mark, alludes to Adam and Eve as real people. Later, as recorded in Matthew and Luke, the Lord also speaks about the literal murder of Abel, Adam and Eve’s second son. And along the same lines, the writer to the Hebrews describes Abel’s sacrifice as an actual event, and places Adam’s murdered son at the head of his list of heroes of the faith.
It would be hard to deny that the authors of Scripture – and the Lord himself – read the Genesis account as historical narrative and viewed Adam and Eve as historical people. But that hasn’t stopped critics from trying. They’ll argue that these authors and their original readers knew they were talking about ancient myths to convey spiritual truth. Or else they’ll claim that the apostles and evangelists – and even Jesus – were simply wrong.
Such claims, however, don’t bear up under serious scrutiny. Reading these texts honestly and in context makes it clear that the authors intended their audience to know they were talking about real people and real events. In each case, the spiritual truth they were trying to convey falls apart unless rooted in historical fact. It’s hard to imagine a rigorous thinker like Paul or a careful historian like Luke getting their facts wrong and using myths to make their case. It’s harder still – in fact impossible – to think of Jesus, the divine author of all truth and reality, making the same mistake.
Dire Gospel implications
From a Gospel perspective, the most significant discussion about Adam and Eve outside of Genesis is found in Paul’s letters to the Roman and Corinthian churches.
In the fifth chapter of Romans, Paul presents Adam and Jesus as the two representative heads of humanity. He spells out in detail how sin and death entered the world through Adam and spread by inheritance to the entire human race. But through Jesus, who took on human nature, Adam’s fallen descendants can receive grace, righteousness and eternal life.
The Apostle reiterates and distills this core Gospel truth to the church at Corinth via a series of vivid contrasts: “For as by a man came death, by a man has come also the resurrection of the dead. For as in Adam all die, so also in Christ shall all be made alive. . . . Thus it is written, ‘The first man Adam became a living being’; the last Adam became a life-giving spirit. . . . Just as we have borne the image of the man of dust, we shall also bear the image of the man of heaven” (1 Corinthians 15:21-22, 45, 49).
There can be no doubt that Paul understood Adam to be just as real as Jesus. But if in fact Adam never existed or was just a hominid plucked from the evolutionary tree, then Paul’s entire case for the Gospel makes no sense. There’s no fall of humanity, no original sin, and no need or possibility of redemption.
Tim Keller addresses the inconsistent idea that Paul’s argument holds up even if he got his facts wrong: “[Paul] most definitely wanted to teach us that Adam and Eve were real historical figures. When you refuse to take a biblical author literally when he clearly wants you to do so, you have moved away from the traditional understanding of the biblical authority. . . . If Adam doesn’t exist, Paul’s whole argument – that both sin and grace work ‘covenantally’ – falls apart. You can’t say that ‘Paul was a man of his time’ but we can accept his basic teaching about Adam. If you don’t believe what he believes about Adam, you are denying the core of Paul’s teaching.”
Old Testament scholar Richard Belcher adds: “If all human beings are not descended from Adam, there is no hope of salvation for them. Christ does not and cannot redeem what he has not assumed. What he has assumed is the human nature of those who bear the image of Adam by natural descent. If there is no redemptive history that is credible, then redemptive history is lost in any meaningful sense. Thus the historicity of Adam has implications for the Gospel.”
And theologian Richard Gaffin is quite blunt in summing up these dire Gospel implications: “The truth of the Gospel stands or falls with the historicity of Adam as the first human being from whom all other human beings descend. What Scripture affirms about creation, especially the origin of humanity, is central to its teaching about salvation.”
The frontiers of science
Naturally none of this has deterred skeptics (and sadly many believers) from assuming that “settled science” has ruled out the possibility of Adam and Eve ever existing, never mind being the progenitors of the entire human race. But science – which at its heart is about discovery and not consensus – has done nothing of the sort. In reality, these bald assertions aren’t based on objective investigation, but on materialist assumptions that dismiss out of hand any non-natural explanations for the origin of life.
Science, of course, can neither prove nor disprove whether Adam and Eve existed, nor does it need to. But studies of genetics, linguistics and the spread of pathogens at least suggest the likelihood that humanity arose relatively recently, in one location, and from a small population, perhaps even from a single pair.
From the field of population genetics, cutting-edge research published in the journal BIO-Complexity has lent strong support for the possibility that humans descend from a single couple, despite frequent claims to the contrary. The authors of the paper, biologist Ann Gauger and mathematician Ola Hössjer, used sophisticated computer modelling to trace the diverse branches of the human genetic tree back to a statistically probable point of origin. Their findings indicate that humanity could easily have originated from a single ancestral couple, as recently as the time when Neanderthals are commonly believed to have appeared on the scene.
Once again, this doesn’t prove the Genesis account, and that was never Gauger and Hössjer’s intention. What they set out to do – and accomplished brilliantly – was to show that contrary to materialist orthodoxy, Adam and Eve are indeed a scientifically feasible explanation for the origin of humanity. Both researchers were forthright about why such a study as theirs had never been pursued before.
Hössjer explained: “Well, the reason is philosophical rather than based on empirical facts. Modern science is very secular. Typically, only those hypotheses are allowed to be tested that can be framed in purely natural terms (i.e. methodological naturalism). A model with a first couple implicitly requires an Intelligent Designer or a Creator in order to answer how this first couple was generated in the first place. Modern science will therefore rule out a first couple model from the start (even if one leaves it to the reader to answer how the first couple originated), before data has been analyzed.”
Gauger was even more to the point: “First of all, who gave scientists the right to interpret Scripture? Why should they care if we believe that we came from a literal first couple? They stuck their noses in where they didn’t belong. Second, they actually didn’t test the thing they were claiming.”
Concluding thoughts
To paraphrase Mark Twain, the reports of Adam and Eve’s non-existence have been greatly exaggerated. As one might expect, nature and Scripture are never at odds with each other. God is the author of both, so there can be no hidden secret, lurking in the natural world, waiting to come to light and prove God’s Word wrong. Of course, it’s vital to interpret both correctly, a principle worth remembering by scientists and theologians alike.
But the historicity of Adam and Eve reaches far beyond drawing proper lines between science and metaphysics. The question impacts the truth of the entire Gospel narrative of Scripture. The creation, fall, redemption and restoration of humanity, the intrinsic value of human life and salvation through Christ, the second Adam, all hinge on the literal existence of the first Adam and his wife Eve, created directly by God in his own image.
Adam and Eve may have borne the shame of plunging humanity into sin and death. However, believers need not be ashamed of the existence of our first parents in the face of skeptical opinion. Quite the contrary, a literal Adam and Eve give us a sense of grounding, humility and assurance for our faith. Their story forms the opening chapter of God’s real, historical narrative through which he’s redeeming his people as well as his entire creation.
Why I Think Adam was a Real Person in History
I have always loved the opening chapters of the Bible. Genesis 1 reveals the broad brush strokes of God’s mighty acts of creation and the declaration by God that all that he made is very good. Genesis 2 begins the story of God’s relationship with a particular couple of people—Adam and Eve[1]—and their offspring.
These accounts of creation are old stories, full of profound mystery and beauty. In acknowledging mystery, I am not saying we can’t understand them. In one sense they are so simple my four-year-old child understands them. Yet countless wise scholars have plumbed their depths for millennia and still haven’t come to the end of their meaning. So it is with a keen sense of my personal limitations—as one trained in the sciences, not in biblical studies, but also as a 21st-century Westerner—that I offer my perspective on Adam and Eve.
I have long wrestled with God over how to think about the Garden story, both on its own and in the context of the New Testament. I have benefited from many wise counselors, both in my church community and through my work here at BioLogos. There are many elements that seem fantastical—man and woman made of dust and rib, God walking in the Garden, a talking serpent, two unusual trees, and an angel with a flaming sword, just to name a few. All of this sparks my imagination and makes me yearn for a time machine. I’d love to go back to the time when God first revealed himself to humankind, even more than I want to see the dinosaurs (which is a lot).
Evolutionary creationists have many views of Adam and Eve
The Bible raises its own difficult questions about Adam, and the evolutionary account of the origin of humanity only adds to the complexity. Scholars have described many possible ways of interpreting Adam as they have explored the theological ramifications if evolution (especially human evolution) is true.
Among Evolutionary Creationists (an admittedly modest slice of the creationist pie), there is quite a diversity of views on Adam and Eve. Since there exist multiple views that are consistent with both the Bible and current science, BioLogos doesn’t elevate one view over another, and Adam and Eve are not referenced in our statement of beliefs. But everybody in our community has a view (or more than one).
Three beliefs are shared by all Evolutionary Creationists with respect to the origin of the first people: 1) the Bible is the inspired and authoritative word of God; 2) the diversity and interrelation of all life on earth (including humans) are best explained by the God-ordained process of evolution with common descent; and 3) God made all people in his image. There is a lot of latitude for how to fit these beliefs together.
Not surprisingly, then, the possibilities for how to think about Adam and Eve are dizzying and not mutually exclusive. Some believe Adam and Eve were a real, historical couple, recognizing varying degrees of figurative language in the text. Others believe the story is myth (and I mean ‘myth’ in the technical sense: “a story or parable having the main purpose of teaching eternal truths without the constraints of historical particularity”[2]). Some view Adam and Eve as the beginning of Israel, not of humanity as a whole; or as archetypes—people who represent us all; or as literary, not literal. Each of these latter three views could fit with either of the first two, and with each other, in various ways.
Among Evolutionary Creationists who envision a historical first couple, there are two main views: 1) Adam and Eve lived among a few thousand people at the “headwaters of humanity,” perhaps 200,000 years ago in Africa. 2) Adam and Eve lived just 10,000 years ago (give or take) in Mesopotamia, at a time when people had already spread across the globe. In either of these scenarios, Adam and Eve could be related to us today via a representative relationship or a genealogical one. All Evolutionary Creationists agree that the scientific evidence indicates that the human population has never dipped below a few thousand within the last 200,000 years.
Each Christian’s view of Adam and Eve is informed by a variety of biblical and scientific data as well as by theological tradition and personal intuition. Unfortunately many Christians have been demonized for not taking the “right” view, though I believe the accusers are driven by legitimate concerns. Those who hold to a historical first couple worry that doing away with historicity may lead to the watering down or outright rejection of the doctrine of original sin, or that it may lead to doubts about the historicity of other biblical figures and events. Those who see Adam and Eve as figurative think the literal Adam approach does not fully appreciate the theological richness and cultural backdrop of Genesis. There are scientific and philosophical questions on both sides, not just hermeneutical ones. I’m sympathetic to concerns on both sides. I personally know faithful followers of Christ in most “ecological niches” of the Adam and Eve landscape, and that gives me hope. We have much to gain by rolling up our sleeves together to examine the possibilities.
My view of Adam and Eve
So, what do I think? I prefer to believe that Adam and Eve were a real couple in history who lived in Mesopotamia, among a larger population of people, perhaps around 6,000 B.C.[3] Their bodies bore the marks of millions of years of evolution; they shared common ancestry with others of God’s creatures. Their very distant relatives lived alongside and even occasionally interbred with other hominin species (Neanderthals and Denisovans).
Thriving cities existed when Adam and Eve lived. Art, trade, tools, language, and farming were familiar to their contemporaries. The people of that day bore God’s image, for it was bestowed on them when God brought them into being, and they were already engaged in subduing the earth. Yet they knew him not and did not call on his name, though perhaps they were seeking God and reaching out for him (Acts 17:27).
In the fullness of time, God called two people, Adam and Eve, into a special covenantal relationship with himself, and into a one-flesh unity with each other. They were chosen for a purpose: to begin a family that would include others who were specially chosen—among them Abraham, Moses, David, and many other men and women whose deeds are recorded in Scripture. Ultimately this family, which became the Israelite people, would give rise to Jesus Christ, the ultimate source of blessing to all the nations.
God revealed himself to Adam and Eve in an intimate way. A spiritual birth had taken place: for the first time[4] they knew God and they knew God had a will and so did they. They were selves, free to obey or rebel. He gave them rules and consequences for breaking those rules. And they chose, in their freedom, to rebel.
Whether or not there was an actual piece of fruit involved is interesting but beside the point: they were after what it represented—knowledge of good and evil. They sensed that God was withholding something from them, and they rejected his right to do so. This was the first sin, the first transgression of the law of God. This first or “original” sin brought death in the form of alienation and eternal separation from God. Brokenness, guilt, shame, isolation, and death—all of these we inherit from Adam as our representative (or as theologians would say, our “federal head”). Adam’s sin became our sin.
Hitting closer to home, Adam and Eve’s sin is my sin. When I read Genesis 3, the story we call the Fall (though the Bible never uses that term), my heart aches. It aches because this is my story: I am guilty for Eve’s sin, but also because I sin like Eve. I know in my head that God knows what’s best for me, but I chafe against his timing and against particular circumstances in my life. I crave and take things that God has declared off limits. I tend to love people and things more than God. I can turn a blind eye to injustice. I prefer the standards I have set for myself, rather than God’s standards. In my heart are pride and fear and lust and greed.
For all this and much more, I deserve the curse of death. God’s righteousness demands judgment, and it has come, but not on my head. It has come on the beautiful, bloodied head of Jesus.
I must emphasize again that what I’ve described here is my view, not “the BioLogos view” on Adam and Eve. On any given weekday at the BioLogos office you can find several of us clustered together for a few minutes having a feisty argument about a particular Bible verse (or about apologetic approaches, God’s sovereignty vs. human freedom, hermeneutics, inerrancy, the moral status of early hominids, theories of atonement, and so on). It isn’t tense (most of the time) because we’re iron sharpening iron. We are brothers and sisters trying to work out our best understanding of the Bible and science.
My view on Adam will be considered too narrowly conservative to many people I know and love. It will be considered dangerously progressive by others. But perhaps a few people will find it helpful. It isn’t novel by any means, but it may be new to many readers. So I’d like to anticipate and respond to some questions that may come to mind.
Wouldn’t it be easier to simply reject human evolution?
Many Christians accept evolution of plants and animals but draw the line at humans. Why don’t I? Because I have encountered compelling evidence from multiple scientific disciplines that supports common ancestry of humans with other animals.[5] While it might be convenient in church circles to dismiss or downplay this evidence, to do so would violate my integrity. I simply must testify to what I believe the created order is revealing about itself and about God.
If accepting evolution meant I had to reject core doctrines of the Christian faith, or deny the authority of Scripture, I wouldn’t do it. As outlined above, I affirm the doctrine of creation and the doctrine of original sin. I also affirm humankind’s being made in God’s image and human uniqueness. I believe God’s word is our ultimate authority in matters of life and faith. Evolution may conflict with certain interpretations of Scripture or with certain doctrinal theories, but other interpretations and theories remain viable.
If I accept evolution, then why do I accept a historical Adam?
In my experience, many people assume that if you accept human evolution, then you reject a historical Adam and Eve, and if you accept a historical Adam and Eve, then they can’t be part of the evolutionary story. But as explored above, there are ways of reconciling historicity of Adam and human evolution.
It’s conceivable that the author of Genesis never had a real couple and real events in mind when he wrote Genesis 2-3, or that he did but was mistaken, and was only a “man of his time.” Intuitively, however, I feel that it is much more likely that real events involving a real couple were “mythologized” as they were told and retold over many generations. As Kenneth Kitchen writes,
The ancient Near East did not historicize myth (i.e. read it as imaginary “history”.) In fact, exactly the reverse is true—there was, rather, a trend to “mythologize” history, to celebrate actual historical events and people in mythological terms….[6]
Through all the fantastical bits of the Adam and Eve story, I read a historical narrative. This narrative, told in ancient Near Eastern, figurative, archetypal language, seems to have real events in mind. This impression helps me make sense of the enduring power of the opening chapters in Genesis. No mere piece of fiction has ever explained the human condition more simply and profoundly than Genesis 3.
A related reason why I prefer a historical Adam and Eve scenario is that beyond Genesis, multiple biblical writers seem to talk about Adam as if he was a real person. Adam is important in Paul’s theology; he compares Adam and Christ in both Romans 5 and 1 Corinthians 15. It could be argued that Paul’s point still holds if Adam is simply a character in a story. I’m sympathetic to this. I think of Frodo Baggins, Elizabeth Bennett, Precious Ramotswe, Atticus Finch, and a great many others as “real” in an important sense. They are dear to me and I’ve learned important things from them. But it’s hard for me to imagine that Paul would base important theology on a literary character[7]—or worse, an imaginary one he thought was real.
Here’s another example: when talking about marriage and divorce, Jesus quotes Genesis 1:27 and 2:24. He says,
Haven’t you read, he replied, that at the beginning the Creator “made them male and female,” and said, “For this reason a man will leave his father and mother and be united to his wife, and the two will become one flesh”? So they are no longer two, but one flesh. Therefore what God has joined together, let no one separate. (Matt. 19:4-6)
Interestingly Jesus does not refer to Adam and Eve by name, but the one-flesh idea is directly tied to Genesis 1 and 2. It may be that Jesus was affirming the one-fleshness of marriage, and not explicitly affirming a historical couple, but for me it adds to the weight of scriptural witness on this point.
There are also references to Adam in genealogies (1 Chron. 1, Luke 3, Jude). Genealogies do a lot of theological heavy lifting that the original audience would have noticed immediately; they are not simply lists of father-son relationships. So this is a less impressive data point but a data point nevertheless.
Now, having argued for a historical Adam, I need to point out that the Gospel does not fall apart in non-historical scenarios. My friends and colleagues who embrace non-historical views of Adam recognize their own sinfulness and need for salvation, and they see the historical life, death, and resurrection of Jesus as the answer to that need.
If I accept a historical Adam, why not also accept a special, de novo creation of Adam?
It might be construed as cherry-picking to accept a literal Adam but reject the vivid description of Adam and Eve being formed of dust and rib. Indeed, one recent proposal put forward by Joshua Swamidass envisions just such a de novo, special creation of Adam and Eve, whose offspring interbred with biologically compatible beings outside the Garden, who were created by an evolutionary process. Any such genetic evidence would not be expected to be preserved. This approach has the hermeneutical advantage (according to some) of de novo creation of a single pair, while at the same time allowing for evolution of the rest of humanity.
Science is silent here: it doesn’t point to this possibility, nor does it rule it out. God could have miraculously created Adam and Eve in this way, but it doesn’t seem necessary to me in order to affirm a historical pair. Among other things, one wonders why God would make two individuals who are presumably biologically identical to other humans at that time. While I am not yet personally persuaded, this proposal is creative and deserves further reflection.
For my part, I am indebted to a number of Bible scholars[8] who have persuasively argued that Genesis 2 is not intended to be a blow-by-blow account of how God fashioned two people’s bodies from dust and rib.
Consider Genesis 2:7: “the LORD God formed the man of dust from the ground.” As Old Testament scholar John Walton has pointed out,[9] the word is very clearly dust, not clay. But dust cannot be formed into a shape. More likely this is a reference to Adam’s mortality. Psalm 103:14 states, “for he knows how we are formed, he remembers that we are dust.” If we who were born in the normal biological way are made of dust, why is Adam’s body necessarily different? It may be that Adam was born of a woman and is also made of dust, just as the Bible indicates repeatedly elsewhere.
As for Eve, Walton points out that the Hebrew word for “rib” could also be translated “side.” Eve is literally Adam’s other half. Also, Adam says Eve is “bone of my bones and flesh of my flesh.” I have been intrigued in my journey through the Old Testament this year to discover that the English Standard Version includes multiple references to someone being “my bone and my flesh.” This is a recognition of a close familial relationship and the unity that it represents. This is also what we have in marriage, where the two become one flesh (Gen. 2:24).
Why do I think Adam and Eve are representatives, and not necessarily sole progenitors, of all humanity?
One reason I don’t believe Adam and Eve were the sole progenitors[10] of all humanity is because the Bible itself gives hints that there were other people around when Adam and Eve lived. When their son Cain murdered his brother Abel and was cursed to wander, he was terrified: “whoever finds me will kill me” (Gen. 4:14). Of whom was he afraid? Surely not his own family. Also, Cain has a wife: are we prepared to say she was his sister? And when he builds a city, is it just for his small family? No, there seem to be lots of other people in view.
I don’t think ancestry is unimportant. Even in a recent-Adam scenario such as I’ve described, Adam and Eve could be ancestors of us all: there have likely been “many individuals, and potentially couples, across the globe who are each individually genealogical ancestors of all those alive when recorded history began.”[11] It must be emphasized, though, that there has never been a time in the past couple hundred thousand years when our ancestral population was as small as two.[12]
I have difficulty with scenarios that locate a non-Homo sapiens first couple in the ancient past (i.e., more than a few tens of thousands of years ago). The biblical genealogies would have to be missing massive numbers of generations, and the setting of the Garden near the Tigris and Euphrates (modern-day Iraq) would seem to be at odds with the African origin of humanity as pictured by current science.
While sole progenitorship of Adam and Eve is a hill to die on for many Christians, I suspect it may be a red herring. When I read Romans 5 and 1 Corinthians 15, which are the two main places where Paul compares Adam and Christ, I can’t help but notice that our salvation in Christ does not depend in the least on our having a genetic or genealogical relationship with him (Jesus had no children, after all). Yet his righteousness is imputed to us all the same. So if Adam is a “pattern of the one to come [Christ],” it seems to me that Adam’s sin does not necessarily depend on being passed down in some genetic or genealogical sense. No, the logic in this First Adam–Second Adam comparison is about representation.
I will close with some words from N.T. Wright who beautifully articulates a representative view of Adam and Eve:
…just as God chose Israel from the rest of humankind for a special, strange, demanding vocation, so perhaps what Genesis is telling us is that God chose one pair from the rest of early hominids for a special, strange, demanding vocation. This pair (call them Adam and Eve if you like) were to be the representatives of the whole human race, the ones in whom God’s purpose to make the whole world a place of delight and joy and order, eventually colonizing the whole creation, was to be taken forward. God the creator put into their hands the fragile task of being his image bearers.[13]
I may be wrong, but if so I am in good company. For a topic with as many possibilities as this one, the only thing we can be sure of is that most of us are wrong!
Editor’s Note: Minor edits were made June 12. Thanks to Joshua Swamidass for helpful comments.
[1] The Hebrew adam can mean humanity as a whole or the person we call Adam. As Denis Alexander notes, “there is some ambiguity in the use of the word adam, perhaps an intentional ambiguity, which makes it quite difficult to know when ‘Adam’ is first used as a personal name” (Denis Alexander, Creation or Evolution: Do We Have to Choose, 2nd ed., 2008, 227).
[2] Alexander, 288.
[3] I say “right now” because I am open to changing my mind on the basis of further study and reflection. I say “prefer” because my view is just that—a preference—not an ironclad belief.
[4] Some may wonder how Adam and Eve could be the first to know God or become “selves” for the first time if other people around already bore his image. Surely Adam and Eve’s Neolithic contemporaries (in this scenario) were fully human in the biological and cultural sense. They were moral creatures and understood themselves as “selves.” But Adam and Eve (in this scenario) were the first to be spiritually alive and thus became a new kind of human—what John Stott termed Homo divinus. As Denis Alexander has pointed out, “Being an anatomically modern human was necessary but not sufficient for being spiritually alive; as remains the case today. Homo divinus were the first humans who were truly spiritually alive in fellowship with God, providing the spiritual roots of the Jewish faith.” It may be that the image of God wasn’t introduced until Adam and Eve, but I see no reason why that needs to have been the case. Just as today we affirm that all people have dignity because all are made in God’s image, we can extend that back to all people who have ever lived, even those who lived before Adam and Eve.
[6] This quote appears in Tim Keller’s oft-cited BioLogos paper, “Creation, Evolution, and Christian Laypeople.” The original can be found in K.A. Kitchen, On the Reliability of the Old Testament, 2003, 425.
[7] Conservative scholar Tremper Longman points out that comparing Jesus with a literary figure would not have been uncommon in Paul’s day (Wrestling with the Old Testament, forthcoming). Scot McKnight has also explored Paul’s use of Adam as a literary character.
[8] In my years at BioLogos I have appreciated interactions with and writings by Jack Collins, Pete Enns, Tim Keller, Denis Lamoureux, Tremper Longman, Scot McKnight, Richard Middleton, Bruce Waltke, John Walton, and N.T. Wright, as well as many interdisciplinary scholars and scientists.
[9] John H. Walton, The Lost World of Adam and Eve, 2015. See discussion in Proposition 8.
[10] By “sole progenitors” I mean the idea that Adam and Eve were the only two people from whom all other people descended. Swamidass defines “human” and “sole progenitorship” differently; see Peaceful Science for his view. There is legitimate ambiguity both scientifically and theologically over how to define these terms.
[12] To find a time when the population leading to modern humans might have been as small as two individuals, we would have to go back in time 500,000 years or more (unpublished; see this summary by Richard Buggs). It must be emphasized that a population bottleneck like this has not been shown to have occurred; we simply don’t have the scientific resolution to say it’s impossible. This far back in time, such a couple would not be modern Homo sapiens (which have only existed for about 200,000 years), but members of another hominin species.
About the author
Kathryn Applegate is a former Program Director at BioLogos. While working on her PhD in computational cell biology at Scripps Research (La Jolla, CA), she became passionate about building bridges between the church and the scientific community. In 2010, she joined the BioLogos staff where she has the privilege of writing, speaking, and working with a wide variety of scholars and educators to develop new science and faith resources. Kathryn co-edited with Jim Stump How I Changed My Mind About Evolution (InterVarsity Press, 2016).
Among many other projects during her time at BioLogos, Kathryn most recently led the development of Integrate, a new science and faith curriculum for home educators and teachers at Christian schools. Kathryn and her family enjoy exploring the beaches and state parks of Michigan and are helping to plant a new PCA church in Grand Rapids.
| Unfortunately many Christians have been demonized for not taking the “right” view, though I believe the accusers are driven by legitimate concerns. Those who hold to a historical first couple worry that doing away with historicity may lead to the watering down or outright rejection of the doctrine of original sin, or that it may lead to doubts about the historicity of other biblical figures and events. Those who see Adam and Eve as figurative think the literal Adam approach does not fully appreciate the theological richness and cultural backdrop of Genesis. There are scientific and philosophical questions on both sides, not just hermeneutical ones. I’m sympathetic to concerns on both sides. I personally know faithful followers of Christ in most “ecological niches” of the Adam and Eve landscape, and that gives me hope. We have much to gain by rolling up our sleeves together to examine the possibilities.
My view of Adam and Eve
So, what do I think? I prefer to believe that Adam and Eve were a real couple in history who lived in Mesopotamia, among a larger population of people, perhaps around 6,000 B.C.[3] Their bodies bore the marks of millions of years of evolution; they shared common ancestry with others of God’s creatures. Their very distant relatives lived alongside and even occasionally interbred with other hominin species (Neanderthals and Denisovans).
Thriving cities existed when Adam and Eve lived. Art, trade, tools, language, and farming were familiar to their contemporaries. The people of that day bore God’s image, for it was bestowed on them when God brought them into being, and they were already engaged in subduing the earth. Yet they knew him not and did not call on his name, though perhaps they were seeking God and reaching out for him (Acts 17:27).
In the fullness of time, God called two people, Adam and Eve, into a special covenantal relationship with himself, and into a one-flesh unity with each other. They were chosen for a purpose: to begin a family that would include others who were specially chosen—among them Abraham, Moses, David, and many other men and women whose deeds are recorded in Scripture. | yes |
Creationism | Were Adam and Eve real historical figures? | yes_statement | "adam" and eve were "real" "historical" "figures".. "adam" and eve existed as "real" "historical" "figures". | https://answersingenesis.org/adam-and-eve/ | Adam and Eve | Answers in Genesis | Adam and Eve were real people. But because the best accounts of Adam and Eve are found in the Bible, many critics challenge their existence. Throughout the years, opponents to the historicity of Adam and Eve have challenged the biblical record on several fronts. Even Christians, including Bible college and seminary professors, have argued against a historical Adam and Eve. Often because of evolutionary thought, many claim they were mythological or allegorical figures with no basis in actual history. But are they right?
Our aim is to examine Adam and Eve from the Bible, consider some of the theological implications of believing in a real Adam and Eve, and finally address some of the major challenges to the historicity of the first humans.
According to Scripture, Adam and Eve were the first human beings on the planet. In Genesis, we are told God “formed the man of dust from the ground and breathed into his nostrils the breath of life, and the man became a living creature.” (Genesis 2:7). This man, called Adam, was the first human being. But God did not create Adam to be alone. We read further along,
But for Adam there was not found a helper fit for him. So the Lord God caused a deep sleep to fall upon the man, and while he slept took one of his ribs and closed up its place with flesh. And the rib that the Lord God had taken from the man he made into a woman and brought her to the man . . . . The man called his wife’s name Eve, because she was the mother of all living. (Genesis 2:20–22, 3:20)
In the plain reading of Genesis 1–3, we learn that God created the first two people: Adam and Eve. They were placed in the Garden of Eden and given everything they needed: food, work, companionship, and fellowship with God as they walked with him in the cool of the day (Genesis 3:8). It was perfect—almost.
Then something happened.
A serpent entered the Garden of Eden to tempt Adam and Eve. God had given food from every tree in the garden but commanded the man and woman not to eat from the Tree of the Knowledge of Good and Evil. Eve believed the serpent, ate of the fruit, then gave it to Adam who also ate of the fruit (Genesis 3:6).
This event was catastrophic. Now known as the fall, God judged Adam and Eve for disobeying his command. And true to his word, Adam and Eve began the process of death. The Bible tells us that Adam lived for 930 years and then he died (Genesis 5:5).
Adam and Eve died because of the fall, but the fall didn’t just affect them.
Since their sin, every other person born after them was plunged into rebellion against God’s order. That includes you and every other human being you know. And this rebellion is also the reason we die today. Through Adam’s sin, death came into the world (Romans 5:12).
But Jesus demonstrated he has power over death. Jesus came, lived, was crucified, and rose from the dead (Luke 24:46). Those who are in Christ will not have to suffer the eternal consequences of sin. Through Adam sin entered the world, but through Christ, we can be saved from the punishment of sin.
That’s why a historical Adam and Eve are so important. If you deny a real Adam and real Eve, many of the doctrines in the Bible (including the gospel) would be incoherent. On many occasions, New Testament authors connect a historic Adam and Eve to foundational doctrine, and those connections do not make sense if Adam and Eve were mythological.
Consider the following passages that refer back to a historic Adam and Eve.
Jesus affirms the special creation of Adam and Eve at the beginning (Mark 10:6).
Luke connects the human lineage of Jesus to Adam (Luke 3:38).
Jesus links the doctrine of marriage to Adam and Eve (Matthew 19:4–6).
Paul connects the doctrine of the church to Adam and Eve (Ephesians 5:30–32).
Paul argues for family order because of Adam and Eve (1 Corinthians 11:8–12).
Paul attaches the origin of sin in the world to Eve (1 Timothy 2:13–14).
Paul also connects death from sin to Adam (Romans 5:12–14).
By connecting its doctrines to their real existence and activities, the New Testament overwhelmingly affirms the historicity of Adam and Eve. It’s not possible to deny a real Adam and Eve while at the same time believing the rest of the Bible. Hence it is vital to believe in an actual Adam and Eve to maintain a coherent biblical theology.
Paul underscores why a historical Adam was so important. “For as in Adam all die, so also in Christ shall all be made alive.” (1 Corinthians 15:22). The apostle said that we suffered the consequences of our sins because of a real Adam, but also that in a real Christ we can overcome a real death and be reconciled with the real God.
But not everyone believes the Bible about Adam and Eve. There are many attempted challenges to the history and theology connected to an actual Adam and Eve. Here are some of the more popular disagreements with the Bible’s account of our first parents.
In its popular form, evolutionary theory holds that human beings share common descent with other animals or human-like creatures. According to this view, Adam and Eve could not have been created in the way a plain reading of Genesis 1–2 suggests, since modern humans evolved from pre-existing creatures.
These critics seek to mythologize or allegorize the narrative of the first few chapters of Genesis. For instance, when recounting the narrative of Adam and Eve, The Washington Post proposed:
First, the story existed as myth, inspired in part by the Babylonian creation story, then Saint Augustine made it fact, and biblical literalism reigned for centuries until the Enlightenment, when representation of the couple in art and literature became so accurate that they seemed too human, too real, and people started asking questions, and before long secularism and science turned the story back into myth.
But is that accurate? This does not read the Bible as an integrated whole.
The biblical authors knew what mythology was. On numerous occasions, they clearly distinguish historical fact from mythology (1 Timothy 4:7; 2 Peter 1:16). So when the Bible itself argues for the historicity of Adam and Eve as Jesus, Luke, and Paul did (see above), it affirms the historicity of Adam and Eve. Later, teachers may have also affirmed the reality of Adam and Eve, but so did the authors of Scripture before them. That’s why a mythological or merely allegorical Adam and Eve does not match the rest of biblical teaching.
Another similar challenge to the historicity of Adam and Eve relates to the genre of Genesis. In other words, we shouldn’t believe Adam and Eve were real people because we find the primary account of their lives in a poetic section of Scripture. According to this view, Adam and Eve were merely poetic devices.
Like the first challenge, this argument doesn’t work for the same reasons. Jesus and other New Testament authors refer to Adam and Eve as historical figures. That means the biblical authors read the first chapters of Genesis as history. To deny the historicity of Adam and Eve by turning them into literary characters, one also must deny what Jesus taught (Mark 10:6).
Dr. Terry Mortenson agrees:
The early chapters of Genesis are not poetry, a series of parables or prophetic visions, or mythology. The chapters recount God’s acts in time-space history: acts of creation, providence, and redemption.
When we insist that Genesis 1–11 is history, we are not saying that this section of the Bible is only history, i.e., that it was only inspired to satisfy some of our curiosity about origins. It is far more than history, for it teaches theology, morality, and redemption, and those truths are vitally important. But Genesis 1–11 is not less than history, and what it teaches on the latter themes is rooted in that history.
Yet another challenge is an internal critique of Genesis. God promised Adam and Eve a certain judgment if they ate the forbidden fruit:
[B]ut of the tree of the knowledge of good and evil you shall not eat, for in the day that you eat of it you shall surely die. (Genesis 2:17)
How is one supposed to reconcile the fact that God promised a particular judgment for a particular sin with a judgment that was not fulfilled? Put another way, why didn’t Adam and Eve die immediately when they ate the fruit? Bodie Hodge addresses this supposed contradiction:
The Hebrew is, literally, die-die (muwth-muwth) with two different verb tenses (dying and die), which can be translated as “surely die” or “dying you shall die.” This indicates the beginning of dying, an ingressive sense, which finally culminates with death.
At that point, Adam and Eve began to die and would return to dust (Genesis 3:19). If they were meant to die right then, the text should have simply used muwth only once, which means “dead, died, or die” and not beginning to die or surely die (as muwth-muwth is used in Hebrew). Old Testament authors understood this and used it in such a fashion, but we must remember that English translations can miss some of the nuances.
Still others have challenged the historical record in that Adam could not possibly have named all the animal species in one day. Adam named all the animals on day 6. And he must have named all the animals in one day, because that was the rationale for God creating the woman (Genesis 2:20)—since she was also created on day 6. According to Scripture,
Now out of the ground the Lord God had formed every beast of the field and every bird of the heavens and brought them to the man to see what he would call them. And whatever the man called every living creature, that was its name. (Genesis 2:19)
There are currently millions of species of animals on the earth. The question is how could Adam have named all the animals on a single day? The narrative strains at believability if we are to believe Adam named millions of animals in less than a day.
But consider the situation.
On the first day, Adam named all the animals created at that time. It is likely that Adam had to name only a couple thousand proto-species of land animals—a task which could easily have been achieved in a few hours. Assuming Adam had to name 2,500 proto-species, or kinds (mostly at the family level of taxonomy), at 5 seconds per animal, it would have taken him approximately three and a half hours to complete the task. This was very doable, even for a person today.
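For what it’s worth, the arithmetic behind this estimate is easy to check. The sketch below simply multiplies the figures assumed above (2,500 kinds, 5 seconds each); both numbers are the article’s own assumptions, not established counts:

```python
# Back-of-the-envelope check of the naming-time estimate above.
# Both inputs are the article's assumed figures, not measured data.
kinds = 2500              # assumed number of proto-species ("kinds") to name
seconds_per_animal = 5    # assumed time to name each one

total_seconds = kinds * seconds_per_animal
hours = total_seconds / 3600  # 3,600 seconds per hour

print(f"{total_seconds} seconds = {hours:.2f} hours")  # 12500 seconds = 3.47 hours
```

This reproduces the "approximately three and a half hours" figure quoted in the text.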
Others have challenged the possibility of a historic Adam and Eve on evolutionary grounds. They contend that human DNA is so similar to chimpanzee DNA that a historical first couple is unnecessary. Mainstream estimates vary, but studies have suggested that human and chimp DNA is about 95–99% similar. Is that possible?
Although the DNA studies do confirm similarities between human and chimp DNA, the latest research by geneticist Jeffrey Tomkins puts the similarities closer to 80-88% compared to 95-99%. And that number may be modified even further as more research comes to light.
Taking the more reliable results provided by the earlier BLASTN version corroborated by the whole chromosome alignments of Nucmer obtained in this study, it is likely that the 88% similarity number is considerably more accurate than other methods to date. Additionally, studies show that chimpanzees have a genome size about 8% larger than humans, so “the actual genome similarity with human, even using the high end estimate of 88% for just the alignable regions, is realistically only about 80% or less when the cytogenetic data is taken into account,” according to the latest Tomkins study.
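As a rough illustration of why a larger genome pulls the overall figure down, one can scale the quoted 88% alignable-region similarity by the quoted ~8% genome-size difference. This is only a back-of-the-envelope proportion, not Tomkins’ published method:

```python
# Illustrative proportion only — NOT the study's actual methodology.
# Both inputs are figures quoted in the article.
alignable_similarity = 0.88  # similarity within alignable regions
chimp_genome_ratio = 1.08    # chimp genome quoted as ~8% larger than human

# If 8% of the larger genome has no human counterpart at all,
# the whole-genome similarity is at most the alignable similarity
# diluted by the extra size.
overall = alignable_similarity / chimp_genome_ratio
print(f"overall similarity <= {overall:.1%}")  # overall similarity <= 81.5%
```

That crude ceiling of roughly 81% is at least consistent with the “about 80% or less” figure in the quotation above.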
Other Scientific Evidences for a Historic Adam and Eve
The Bible affirms the historic accounts of Adam and Eve. However, there’s even more scientific literature that confirms the historicity of the first parents. In a well-documented presentation, Dr. Georgia Purdom outlines several reasons we should trust the Bible when it talks about Adam and Eve.
Conclusion: Why Are Adam and Eve So Important?
In conclusion, the Bible confirms the historicity of Adam and Eve. They were real people, specially created by God, and every person born is related to them. Moreover, there is nothing in scientific research that has been able to disprove the existence of Adam and Eve.
Most importantly, the gospel is dependent upon an actual Adam and Eve. Both in the New Testament (Romans 5:12–14) and in the Old Testament, we see a historical Adam and Eve connected to the promised Messiah. When God judged the serpent after the fall, there was a glimmer of hope that he hid in the judgment:
I will put enmity between you and the woman, and between your offspring and her offspring; he shall bruise your head, and you shall bruise his heel. (Genesis 3:15)
Theologians understand this “offspring” of the woman to be none other than Jesus Christ. And just as Eve found hope in God’s promised offspring, so also we look to God’s promised offspring of redemption, the Lord Jesus Christ. | Adam and Eve were real people. But because the best accounts of Adam and Eve are found in the Bible, many critics challenge their existence. Throughout the years, opponents to the historicity of Adam and Eve have challenged the biblical record on several fronts. Even Christians, including Bible college and seminary professors, have argued against a historical Adam and Eve. Often because of evolutionary thought, many claim they were mythological or allegorical figures with no basis in actual history. But are they right?
Our aim is to examine Adam and Eve from the Bible, consider some of the theological implications of believing in a real Adam and Eve, and finally address some of the major challenges to the historicity of the first humans.
According to Scripture, Adam and Eve were the first human beings on the planet. In Genesis, we are told God “formed the man of dust from the ground and breathed into his nostrils the breath of life, and the man became a living creature.” (Genesis 2:7). This man, called Adam, was the first human being. But God did not create Adam to be alone. We read further along,
But for Adam there was not found a helper fit for him. So the Lord God caused a deep sleep to fall upon the man, and while he slept took one of his ribs and closed up its place with flesh. And the rib that the Lord God had taken from the man he made into a woman and brought her to the man . . . . The man called his wife’s name Eve, because she was the mother of all living. (Genesis 2:20–22, 3:20)
In the plain reading of Genesis 1–3, we learn that God created the first two people: Adam and Eve. They were placed in the Garden of Eden and given everything they needed: food, work, companionship, and fellowship with God as they walked with him in the cool of the day (Genesis 3:8). It was perfect—almost.
Then something happened.
A serpent entered the Garden of Eden to tempt Adam and Eve. God had given food from every tree in the garden but commanded the man and woman not to eat from the Tree of the Knowledge of Good and Evil. | yes |
Creationism | Were Adam and Eve real historical figures? | yes_statement | "adam" and eve were "real" "historical" "figures".. "adam" and eve existed as "real" "historical" "figures". | https://www.thegospelcoalition.org/blogs/kevin-deyoung/reasons-to-believe-in-a-historical-adam/ | 10 Reasons to Believe in a Historical Adam |
In recent years, several self-proclaimed evangelicals, or those associated with evangelical institutions, have called into question the historicity of Adam and Eve. It is said that because of genomic research we can no longer believe in a first man called Adam from whom the entire human race has descended.
I’ll point to some books at the end which deal with the science end of the question, but the most important question is what the Bible teaches. Without detailing a complete answer to that question, let me suggest ten reasons why we should believe that Adam was a true historical person and the first human being.
1. The Bible does not put an artificial wedge between history and theology. Of course, Genesis is not a history textbook or a science textbook, but that is far from saying we ought to separate the theological wheat from the historical chaff. Such a division owes to the Enlightenment more than the Bible.
2. The biblical story of creation is meant to supplant other ancient creation stories more than imitate them. Moses wants to show God’s people “this is how things really happened.” The Pentateuch is full of warnings against compromise with the pagan culture. It would be surprising, then, for Genesis to start with one more mythical account of creation like the rest of the ANE.
3. The opening chapters of Genesis are stylized, but they show no signs of being poetry. Compare Genesis 1 with Psalm 104, for example, and you’ll see how different these texts are. It’s simply not accurate to call Genesis poetry. And even if it were, who says poetry has to be less historically accurate?
4. There is a seamless strand of history from Adam in Genesis 2 to Abraham in Genesis 12. You can’t set Genesis 1-11 aside as prehistory, not in the sense of being less than historically true as we normally understand those terms. Moses deliberately connects Abram with all the history that comes before him, all the way back to Adam and Eve in the garden.
7. The weight of the history of interpretation points to the historicity of Adam. The literature of second temple Judaism affirmed an historical Adam. The history of the church’s interpretation also assumes it.
8. Without a common descent we lose any firm basis for believing that all people regardless of race or ethnicity have the same nature, the same inherent dignity, the same image of God, the same sin problem, and that despite our divisions we are all part of the same family coming from the same parents.
9. Without a historical Adam, Paul’s doctrine of original sin and guilt does not hold together.
10. Without a historical Adam, Paul’s doctrine of the second Adam does not hold together.
Christians may disagree on the age of the earth, but whether Adam ever existed is a gospel issue. Tim Keller is right:
[Paul] most definitely wanted to teach us that Adam and Eve were real historical figures. When you refuse to take a biblical author literally when he clearly wants you to do so, you have moved away from the traditional understanding of the biblical authority. . . .If Adam doesn’t exist, Paul’s whole argument—that both sin and grace work ‘covenantally’—falls apart. You can’t say that ‘Paul was a man of his time’ but we can accept his basic teaching about Adam. If you don’t believe what he believes about Adam, you are denying the core of Paul’s teaching. (Christianity Today June 2011)
Kevin DeYoung (PhD, University of Leicester) is senior pastor of Christ Covenant Church (PCA) in Matthews, North Carolina, and associate professor of systematic theology at Reformed Theological Seminary (Charlotte). He is the author of more than 20 books and a popular columnist, blogger, and podcaster. Kevin’s work can be found on clearlyreformed.org. Kevin and his wife, Trisha, have nine children. | Such a division owes to the Enlightenment more than the Bible.
2. The biblical story of creation is meant to supplant other ancient creation stories more than imitate them. Moses wants to show God’s people “this is how things really happened.” The Pentateuch is full of warnings against compromise with the pagan culture. It would be surprising, then, for Genesis to start with one more mythical account of creation like the rest of the ANE.
3. The opening chapters of Genesis are stylized, but they show no signs of being poetry. Compare Genesis 1 with Psalm 104, for example, and you’ll see how different these texts are. It’s simply not accurate to call Genesis poetry. And even if it were, who says poetry has to be less historically accurate?
4. There is a seamless strand of history from Adam in Genesis 2 to Abraham in Genesis 12. You can’t set Genesis 1-11 aside as prehistory, not in the sense of being less than historically true as we normally understand those terms. Moses deliberately connects Abram with all the history that comes before him, all the way back to Adam and Eve in the garden.
7. The weight of the history of interpretation points to the historicity of Adam. The literature of second temple Judaism affirmed an historical Adam. The history of the church’s interpretation also assumes it.
8. Without a common descent we lose any firm basis for believing that all people regardless of race or ethnicity have the same nature, the same inherent dignity, the same image of God, the same sin problem, and that despite our divisions we are all part of the same family coming from the same parents.
9. Without a historical Adam, Paul’s doctrine of original sin and guilt does not hold together.
10. Without a historical Adam, Paul’s doctrine of the second Adam does not hold together.
Christians may disagree on the age of the earth, but whether Adam ever existed is a gospel issue. Tim Keller is right:
[Paul] most definitely wanted to teach us that Adam and Eve were real historical figures. | yes |
Creationism | Were Adam and Eve real historical figures? | yes_statement | "adam" and eve were "real" "historical" "figures".. "adam" and eve existed as "real" "historical" "figures". | https://www.thegospelcoalition.org/article/sinned-in-a-literal-adam-raised-in-a-literal-christ/ | Sinned in a Literal Adam, Raised in a Literal Christ |
Compared to other questions laypeople ask pastors about creation and evolution, I find the concerns behind this question much better grounded. Indeed, I must disclose, I share them. Many orthodox Christians who believe God used evolutionary biological processes to bring about human life not only do not take Genesis 1 as history, but also deny that Genesis 2 is an account of real events. Adam and Eve, in their view, were not historical figures but an allegory or symbol of the human race. Genesis 2, then, is a symbolic story or myth that conveys the truth that human beings all have and do turn away from God and are sinners.
Before I share my concerns with this view, let me make a clarification. One of my favorite Christian writers (that’s putting it mildly), C. S. Lewis, did not believe in a literal Adam and Eve, and I do not think the lack of such belief means he cannot be saved. But my concern is for the church corporately and for its growth and vitality over time. Will the loss of a belief in the historical fall weaken some of our historical, doctrinal commitments at certain crucial points? Here are two points where that could happen.
The Trustworthiness of Scripture
The first basic concern has to do with reading the Bible as a trustworthy document. Traditionally, Protestants have understood that the writers of the Bible were inspired by God and that, therefore, discerning the human author’s intended meaning is the way that we discern what God is saying to us in a particular text.[1]
What, then, were the authors of Genesis 2-3 and of Romans 5, who both speak of Adam, intending to convey? Genesis 2-3 does not show any signs of “exalted prose narrative” or poetry. It reads as the account of real events; it looks like history. This doesn’t mean that Genesis (or any text of the Bible) is history in the modern, positivistic sense.
Ancient writers who were telling about historical events felt free to dischronologize and compress time frames—to omit enormous amounts of information that modern historians would consider essential to give “the complete picture.” However, ancient writers of history still believed that the events they were describing actually happened. Ancient writers also could use much figurative and symbolic language. For example, Bruce Waltke points out that when the psalmist says, “You knit me together in my mother’s womb” (Ps 139:13), he was not saying that he hadn’t developed in the perfectly normal biological ways. It is a figurative way to say that God instituted and guided the biological process of human formation in his mother’s womb. So when we are told that God “formed Adam from the dust of the ground” (Gen 2:7), the author might be speaking figuratively in the same way, meaning that God brought man into being through normal biological processes.[2] Hebrew narrative is incredibly spare—it is only interested in telling us what we need to know to learn the teaching the author wants to convey.
Despite the compression, omissions, and figurative language, are there signs in the text that this is a myth and not an historical account? Some say that we must read Genesis 2-11 in light of other ancient creation myths of the Near Eastern world. Since other cultures were writing myths about events like the creation of the world and the great flood, this view goes, we should recognize that the author of Genesis 2-3 was probably doing the same thing. In this view, the author of Genesis 2-3 was simply recounting a Hebrew version of the myth of creation and flood. He may even have believed that the events did happen, but in that he was merely being a man of his time.
Kenneth Kitchen, however, protests that this is not how things worked. The prominent Egyptologist and evangelical Christian, when responding to the charge that the flood narrative (Gen 9) should be read as “myth” or “proto-history” like the other flood-narratives from other cultures, answered:
The ancient Near East did not historicize myth (i.e. read it as imaginary “history”). In fact, exactly the reverse is true—there was, rather, a trend to “mythologize” history, to celebrate actual historical events and people in mythological terms. [3]
In other words, the evidence is that Near Eastern “myths” did not evolve over time into historical accounts, but rather historical events tended to evolve over time into more mythological stories. Kitchen’s argument is that, if you read Genesis 2-11 in light of how ancient Near Eastern literature worked, you would conclude, if anything, that Genesis 2-11 were “high” accounts, with much compression and figurative language, of events that actually happened. In summary, it looks like a responsible way of reading the text is to interpret Genesis 2-3 as the account of an historical event that really happened.
Consider the New Testament
The other relevant text here is Romans 5:12ff, where Paul speaks of Adam and the fall. It is even clearer that Paul believed that Adam was a real figure. N. T. Wright, in his commentary on Romans says:
Paul clearly believed that there had been a single first pair, whose male, Adam, had been given a commandment and had broken it. Paul was, we may be sure, aware of what we would call mythical or metaphorical dimensions to the story, but he would not have regarded these as throwing doubt on the existence, and primal sin, of the first historical pair.[4]
If you don’t believe Adam and Eve were literal but realize the author of Genesis was probably trying to teach us that they were real people who sinned—Paul certainly was—then you have to face the implications for how you read Scripture. You may say, “Well, the biblical authors were ‘men of their time’ and were wrong about something they were trying to teach readers.” The obvious question is, “How will we know which parts of the Bible to trust and which not?”
The key for interpretation is the Bible itself. I don’t think the author of Genesis 1 wants us to take the “days” literally, but it is clear that Paul definitely does want readers to take Adam and Eve literally. When you refuse to take a biblical author literally when he clearly wants you to do so, you have moved away from the traditional understanding of biblical authority.
Sin and Salvation
Some may respond, “Even though we don’t think there was a literal Adam, we can accept the teaching of Genesis 2 and Romans 5, namely that all human beings have sinned and that through Christ we can be saved. So the basic biblical teaching is intact, even if we do not accept the historicity of the story of Adam and Eve.” I think that assertion is too simplistic.
The Christian gospel is not good advice, but good news. It is not directions on what we should do to save ourselves but rather an announcement of what has been done to save us. The gospel is that Jesus has done something in history so that, when we are united to him by faith, we get the benefits of his accomplishment, and so we are saved. As a pastor, I often get asked how we can get credit for something that Christ did. The answer does not make much sense to modern people, but it makes perfect sense to ancient people. It is the idea of being in “federation” with someone, in a legal and historical solidarity with a father, or an ancestor, or another family member or a member of your tribe. You are held responsible (or you get credit) for what that other person does. Another way to put it is that you are in a covenant relationship with the person. An example is Achan, whose entire family is punished when he sins (Josh 7.) The ancient and biblical understanding is that a person is not “what he is” simply through his personal choices. He becomes “what he is” through his communal and family environment. So if he does a terrible crime—or does a great and noble deed—others who are in federation (or in solidarity, or in covenant with him) are treated as if they had done what he had done.
This is how the gospel salvation of Christ works, according to Paul. When we believe in Jesus, we are “in Christ” (one of Paul’s favorite expressions, and a deeply biblical one.) We are in covenant with him, not because we are related biologically but through faith. So what he has done in history comes to us.
What has all this to do with Adam? A lot. Paul makes the same point in 1 Corinthians 15 about Adam and Christ that he does in Romans 5.
For since death came through a man, the resurrection of the dead comes also through a man. For as in Adam all die, so in Christ all will be made alive (1 Cor 15:21-22).
When Paul says we are saved “in Christ” he means that Christians have a covenantal, federal relationship with Christ. What he did in history is laid to our account. But in the same sentence Paul says that all human beings are similarly (he adds the word “as” for emphasis) “in Adam.” In other words, Adam was a covenantal representative for the whole human race. We are in a covenant relationship with him, so what he did in history is laid to our account.
When Paul speaks of being “in” someone he means to be covenantally linked to them so their historical actions are credited to you. It is impossible to be “in” someone who doesn’t historically exist. If Adam doesn’t exist, Paul’s whole argument—that both sin and grace work “covenantally”—falls apart. You can’t say that Paul was a man of his time but accept his basic teaching about Adam. If you don’t believe what he believes about Adam, you are denying the core of Paul’s teaching.
[1] Granted, often New Testament writers see Messianic meanings in Old Testament prophecies that were doubtless invisible to the OT prophets themselves. Nonetheless, while a biblical author’s writing may have more true meanings than he intended when writing, it may not have less. That is, what the human author meant to teach us cannot be seen as mistaken or now obsolete without surrendering the traditional understanding of Biblical authority and trustworthiness.
[2] See Bruce Waltke, Genesis: A Commentary (Zondervan, 2001), p. 75. Of course, Waltke notes that Psalm 139 is poetry and Genesis 2 is narrative, but that does not mean that prose cannot use figurative speech and poetry literal speech. It only means that poetry uses more figurative and prose less. Another example of a narrative that speaks of the divine power behind a natural process is Acts 12:23. There we are told that Herod Agrippa was delivering a public address to an audience when “an angel of the Lord struck him down, and he was eaten by worms, and died.” Josephus relates that Agrippa did indeed fall ill at the same time, but it was due to a “severe intestinal obstruction.” Here again we see the Bible speaking of God’s action behind a natural biological process.
Tim Keller (MDiv, Gordon-Conwell Theological Seminary; DMin, Westminster Theological Seminary) was founder of Redeemer Presbyterian Church (PCA) in Manhattan, chairman of Redeemer City to City, and co-founder of The Gospel Coalition. He wrote numerous books, including The Reason for God. He and his wife, Kathy, had three children.
https://drmsh.com/comments-views-historical-adam-reviews/
As promised, I want to post a few thoughts on Nijay Gupta’s posted reviews of the four views book on the Historical Adam. These comments presume you have read the reviews (here and here).
On Lamoureux’s view (“Evolutionary creation and no historical Adam”):
1. I think Nijay is correct about the fact that biblical scholars ought to avoid being armchair scientists. This is pretty common. Within evangelicalism and fundamentalism there has historically been an antipathy and a heartfelt mistrust of scientists. That shifts to portraying people like Lamoureux, who is a scientist, as biblical ignoramuses. That really doesn’t work here, since Lamoureux also has a doctorate in theology to go with his science doctorate. So does Alister McGrath. There are others. While we always have to hold out the possibility that the science is being done wrong, it seems quite misguided to conclude that scientists who are strong believers are deliberately deceiving the faithful — hiding the flaws of evolutionary science for some (?) motive. I personally know a good number of PhDs in the hard sciences who are strong believers. They would opt for Lamoureux’s view (or theistic evolution). Sure, they may lack the sensitivities to theology or the biblical text since they don’t have doctorates in theology as well, but they can defer to those who do. There are plenty of evangelicals who aren’t scientists whose view of how Genesis can or should be interpreted accommodates Lamoureux. The “willful deception” or mistrust of science just seems to go nowhere.
2. Readers know I’m comfortable with the concept of accommodation since it obviously happens in Scripture. Example: We really do know where babies come from and how we get them genetically. A full human person can only exist genetically inside a woman who is pregnant. That is not possible in a man in any natural situation. Hence the writer of Hebrews’ comment about Levi paying tithes to Melchizedek while in the loins of Abraham is a scientifically impossible (not just implausible) statement. But God didn’t bother to correct the pre-scientific misconception as the writer wrote. Why? Because the point being made was a theological one, not a biological one. The passage doesn’t put forth any scientific proposition. It puts forth a theological one. The writer happens to use a scientifically untenable argument to get the point across. That is true regardless of what the passage *theologically* means. God accommodated to the scientific ignorance of the writer. In other words, God didn’t care. The writer still got the job done, as God knew he would. Accommodation is real, and so it’s on the table as a possible way to look at what’s going on with Adam (and other material in the Bible).
3. I might disagree with the way Lamoureux talks about the “entrance” of sin. Even if we don’t have a historical Adam, we still have sin and the Adam story could simply be telling us that it began as soon as we had humanity. In other words, there’s still a conceptual connection in Lamoureux’s view that I can imagine. I’d need to be in conversation with him to know more precisely what he’s thinking here.
4. I don’t really have Nijay’s angst about the fall since I view the fall as telling us simply “humans lost immortality when they sinned” (and do so unto this day). Due to my view of Romans 5:12, I don’t see the Adam incident dealing with guilt before God being transmitted to all humans. I see the teaching point of the Fall story as all humans are mortal now, apart from God’s presence, where there is eternal life. All humans in that condition who are allowed to live long enough (i.e., they’re born and reach an age of willful acts that transgress Scripture’s moral laws) will invariably sin. The only exception was Jesus, because he was also God. This is all old stuff for Naked Bible readers.
5. The question is “Do we need a historical Adam for Mike’s view?” I would answer it this way. If there was no historical Adam — i.e., that the Adam story is analogous (under inspiration) to the story of Israel (à la Seth Postell, though Pete Enns gets the credit for the view), the teaching point isn’t lost, and neither is the real life reality. In other words, if ALL that is said is that the Adam story is to teach us ideas: (1) human mortality; (2) human inability to avoid sin; (3) the need for a cure for sin outside human merit, then it looks to me like our theology is intact without Adam. Again, I’m imagining out loud here — if God’s point in inspiring the Gen 3 account was to tell us these things through a story that had no historical reality — deciding that a story was the best way to get the points across — I don’t see us losing anything. What about Jesus, you ask? He’s still the new (second) Adam. He is the person who reverses the problems we learned we all have through the Adam story. Theologically, it’s all intact. The rub is Paul’s language (“second Adam” sounds like it presumes there was a first – historical – Adam). It is at this point that Enns says “Paul was wrong about Adam and right about Jesus.” I’m not convinced that’s necessary, since the real issue with the Adam and science thing is statistical genetics. The results of Venema’s research have been questioned.1 I still think it’s going to be a long time before we know that a single human pair is impossible to reconcile with the genetic record. If that happened, then we could file Paul’s statement under accommodation like we’d (I’d) file the remarks in Hebrews 7.
On Walton’s View (“Archetypal [“Representative”] Adam”)
Here was the excerpt of Walton Nijay used in his review:
[Walton]: In my view, Adam and Eve are historical figures—real people in a real past. Nevertheless, I am persuaded that the biblical text is more interested in them as archetypal figures who represent all of humanity. This is particularly true in the account in Genesis 2 about their formation. I contend that the formation accounts are not addressing their material formation as biological specimens, but are addressing the forming of all of humanity: we are all formed from dust, and we are all gendered halves. If this is true, Genesis 2 is not making claims about biological origins of humanity, and therefore the Bible should not be viewed as offering competing claims against science about human origins. If this is true, Adam and Eve also may or may not be the first humans or the parents of the entire human race. Such an archetypal focus is theologically viable and is well-represented in the ancient Near East.
There’s a lot to like here (at least for me as an ancient Near East guy). It allows for historical figures. The portrayal of those figures presumes the intent of the author was to present an archetype and not be claiming the origin of the entire human race.
1. The difficulty for many would be that the view says that the traditional understanding (Adam is presented as the first of all humans) is a misreading. This view says that it is wrong to interpret Genesis 2-3 as literal history OR as science. And yet Adam and Eve did exist. The intent of the account under inspiration was not to convey literal history in terms of detail and chronology, but neither was it to invent two characters to make a point. They did exist, but what we learn about them here must be filtered through the archetypal intent of the material.
If this view is correct, it wouldn’t be the first time a powerful tradition has erred in its reading of the Bible. I don’t care much about tradition, so this doesn’t bother me.
2. For me, this view is easily married to the Adam as Israel view. I’m not saying John would like the marriage, but it’s do-able. Adam is an “archetype” for Israel as well as humans. It’s easy.
3. Walton’s view helps us realize that the issue must be broken down into smaller questions, each one of which must be worked through to see the options:
Adam and Eve existed. Were they the original human pair, or an original pair in the focus of the biblical writer? (Were they original only for the sake of the story, or for the sake of its inspired representative strategy?)
Does the Adam and Eve story aim to tell us about science? The history of humankind? Or something else? What did the author have in his head in terms of the purpose of the material? Could it have been representation of humanity (or Israel) in the context of the rest of Genesis 1-11 and the story of Israel? How can we definitely rule that out?
[Collins]: In this chapter I argue that the best way to account for the biblical presentation of human life is to understand that Adam and Eve were both real persons at the headwaters of humankind. By “biblical presentation,” I refer not only to the story in Genesis and the biblical passages that refer to it, but also to the larger biblical story line, which deals with God’s good creation invaded by sin, for which God has a redemptive plan; of Israel’s calling to be a light to the nations; and of the church’s prospect of successfully bringing God’s light to the whole world. That concerns the unique role and dignity of the human race, which is a matter of daily experience for everyone: All people yearn for God and need him, must depend on him to deal with their sinfulness, and crave a wholesome community for their lives to flourish.
I argue that the nature of the biblical material should keep us from being too literalistic in our reading of Adam and Eve, leaving room for an Earth that is not young, but that the biblical material along with good critical thinking provides certain freedoms and limitations for connecting the Bible’s creation account to a scientific and historical account of human origins. (p. 143).
Nijay makes two notes at the outset with Collins’ view.
He observes that Collins is concerned with Walton’s approach to sin. Collins, per Nijay, would ask: Where did it come from? Does Walton’s approach make clear “the foreignness of sin in God’s plan”?
Collins’ use of “headwaters” in the above excerpt distinguishes his view from the earlier two. It requires that Adam and Eve were *the* ancestors of all humans and that what they did brought sin to all humankind.
Despite my view on Romans 5:12, which Collins would not like, my view agrees with the second bullet point. My view of Rom 5:12 works with a literal-historical Adam and Eve or an archetypal one. The human condition extends from them. As noted above, it can work with no historical Adam and Eve if ever pressed into service for that. The teaching points are always the same. Only the referent (a story, an archetype, two real humans) differs. Consequently, Collins’ second concern here is one that doesn’t trouble me.
I’m also not troubled by the first bullet point. Of course sin is foreign (in my view, as applied to any and all of these Adam views). The original humans (literal or otherwise) are portrayed as in God’s presence. There is no sin there, so any sin has to be foreign. I don’t see a problem until we get to the word “plan”. Collins is a staunch Calvinist, so it’s natural he can’t process the material in any other way than God predestining what happened — and that requires the people at the heart of the event to be historical. I don’t have that problem either. The *teaching point* is that sin (my view) came voluntarily, by free will, which must be genuine else we could not be imagers of God, and so our guilt before God is earned by us — we are to blame. That is the human condition (and that approach is viable in all the views).2
On Bill Barrick’s View (Historical Adam, Young Earth Creation View)
Here is Nijay’s excerpt:
In my view Adam is the originating head of the entire human race. Adam’s historicity is foundational to a number of biblical doctrines and is related to the inspiration and authority of Scripture. This traditional view of Adam rejects accommodation to evolutionary science, upholding instead that the Holy Spirit superintended the author of Genesis so that he wrote an objective description of God’s creative activities in six consecutive literal days.
The biblical account represents Adam as a single individual rather than an archetype or the product of biological evolution, and a number of New Testament texts rely on Adam’s historicity. More importantly, without a historical first Adam there is no need for Jesus, the second Adam, to undo the uniqueness of the Genesis record and give it priority over ancient Near Eastern materials and modern science in all discussions of primeval history and the historicity of Adam and Eve.
I see several problems here. That’s a little hard to say since I know Bill. He’s a wonderful guy and very gifted with languages. But there are some coherence gaps here.
1. He writes “a number of New Testament texts rely on Adam’s historicity.” This sounds like everyone who’d argue otherwise is making up how to understand those texts — and should just admit that and reject Christianity. It’s an either-or fallacy. It’s not true that either we adopt Bill’s view or we might as well scrap everything. Bill can reject the other views without this sort of gauntlet, which anyone who reads the other views will see through immediately. It’s unnecessary for articulating a view that would reject any point of modern science that interferes with a straightforward reading of Gen 2-3.
2. No matter what view one takes, the human condition of sin and guilt before God has no other solution than Christ. Again, the language Bill uses is too categorical. No one in this debate is looking for, or claiming to have found, an alternative solution to human guilt before God. Even if the other views are wrong, those who hold them are still depending on Christ for salvation. Their “error” doesn’t cancel out their faith as though it can’t be real apart from a young earth view.
3. Bill talks about undoing “the uniqueness of the Genesis record.” I’m not sure if he has only Gen 2-3 in mind here (that’s the context of his essay) or Genesis in general. In both cases the concern fails. No other ancient Near Eastern creation of humankind account (archetypal or otherwise) has humanity in a guilt relationship before God / the gods linked to the loss of immortality. That detail — accountability that affects the whole human race (again, literal or archetypal) and which needs atonement — is still unique no matter what view one holds of Adam. I fail to see the validity of this point of concern.
This last comment also has sub-problems: Must something be unique to be true? To be inspired? On what grounds? Are the other items in the OT (or just Genesis) that aren’t unique still true? Inspired? You get the idea. It’s logically fallacious. Nothing hinges on “uniqueness.” God can use whatever strategy he likes to communicate something through the people he chose as authors. He doesn’t need them to write material that has no connection with the world in which the recipients of the revelation live.
For those who didn’t read my posts of some time ago about Adam and recent human genetics work, see here to catch up. ↩
This is a no-brainer for those of you who’ve read my “Myth” draft; sorry if you’re out of that loop. ↩
21 Comments
What is most important here is the real driving thrust God has communicated through His word, regardless of what view one takes about the origin of sin: we are helpless, hopeless and without the power and ability to re-align ourselves into a righted relationship with God (which is to say our relationship starts out at odds with God), and that only Jesus the Christ was able to accomplish righting it. This truth simply cannot be gotten away from. I’m grateful, Doc, that you emphasize this.
Imagine, if you would, if Christians of all creation-beliefs could acquire the relaxation from understanding this truth and discover the rest, peace and unity we would share among ourselves if this truth were to be the foundation upon which the structure of belief for all origins of sin is grounded. Discussion and not argumentation with each other; respect and not disrespect toward each other; learning from and not stifling of each other…(wait! I don’t think that’s going to happen. I guess that’s one good thing about a post-tribulation view: when the hell hits, none of us is really going to care much, if at all, about whether Eve really did speak to a snake!)
MSH
on January 19, 2014 at 8:00 pm
thanks!
kennethos
on January 19, 2014 at 12:05 pm
Heh…given that Dr. Collins was one of my seminary professors, I have to admit to being a bit biased toward his view. It’s good to read all of these views, to see the reviews and the bullet points, to see what’s possible, even what’s likely, and to try to figure it out and understand. Thanks, Mike.
Thanks for the post. I was pretty intrigued by Walton’s approach when I read the book. I wondered what your take would be? Now, I know.
Quick comment about Bible scholars speaking on science (evolution). If they are well read in the area, I think it can be done. They just have to be careful not to overstate things.
However, the other side has to chill out as well. To argue against the shortcomings of Neo-Darwinism is not anti-science (many scientists do so). It might be anti-establishment, but not anti-science.
The other problem, of course, is that too many on both sides are very poor philosophers. This creates an additional set of problems.
I have always wanted to ask you something – now is as good a time as any. Is there some ANE understanding to be had concerning nakedness and shame? It is relevant to Gen 3. It shows up again with Noah and Ham. It shows up again in Lev. 18.
I ask because before death even entered into the story the idea of nakedness and shame did. It seems pretty significant, but I can’t find much about it. Clearly they were already naked. So after the fall did they now see themselves aging or something? Did they realize they now had to reproduce? Any thoughts?
MSH
on January 24, 2014 at 9:01 am
Short answer to your question: This is largely a textual issue (Noah and Ham). If you email me I can send you what I think is the best treatment of this.
Ron Dupree
on January 21, 2014 at 7:36 am
Thanks for this Mike. It’s helpful for me in thinking about these things.
I hope more Christians will develop more patience, graciousness, and accurate understanding of differing views. I think your intellectual honesty and sound, critical thinking should be considered exemplary even among those who don’t share your perspectives.
MSH
on January 24, 2014 at 9:02 am
Thanks – that’s what we try to do here.
Patrick
on January 21, 2014 at 3:03 pm
As it stands, I tend to see them as historic people still, but, also archetypal for humanity and Israel.
Your view of the (A) “2 creation” stories was compelling to me, so I am convinced they are not meant to be regarded as the (B) “first 2 humans”; therefore I have neither theological nor scientific objections to seeing them this way.
I am convinced guys like Peter Enns haven’t exposed themselves to the possibility of (A) and all it entails and are reacting to (B). They’re determined to mythologize Adam when it may not be needed at all.
MSH
on January 24, 2014 at 9:03 am
I tend to think in the same direction, though I can’t speak for Pete’s “determination”.
Anonymous
on February 2, 2014 at 8:16 am
The questions about Adam don’t bother me as much as they used to because I’m learning more and more about ANE stuff. (Thanks to you Dr H) I’ve found a question and an illustration to be helpful when talking to traditionalists on these topics.
When someone argues there must be a literal beginning to sin, I agree then ask this question. Yes, there must be a literal beginning to sin but, why is almighty God obligated to give us an *exact account* of the event? If God chose to tell us about the fall through a fable, a poem or a dirty limerick, who are we to question it? I think this question forces people to realize they are putting restrictions on God when they insist scripture is a certain genre against all evidence.
The illustration has to do with Paul’s comparison between Christ and Adam. Imagine someone said “William Lane Craig is the Batman of apologetics.” You would immediately realize this is simply a compliment to Craig and does not require a belief in a literal Batman to work. What if a child who really did believe in Batman said it? It would still just be a compliment.
Those are fairly easy hurdles. The tough one is explaining my (yours, Dr H) view of Romans.
Shaun
on February 2, 2014 at 8:21 am
There must be a literal beginning to sin. However, why is almighty God obligated to give us an exact, blow by blow account of the event? If God chose to tell us about the fall through a fully, slightly, or even non-historical story, the message about our condition is still the same.
azlanta
on September 14, 2015 at 12:23 pm
Luke 11:50–51, Jesus says:
“That the blood of all the prophets, which was shed from the foundation of the world, may be required of this generation; From the blood of Abel to the blood of Zacharias … ”.
If Adam and Eve are non-historical then Abel’s blood was metaphorically shed as opposed to that of Zacharias?
Don’t think so.
MSH
on September 21, 2015 at 10:01 pm
Not sure why you’re posting this; I didn’t stake out a position. But in any event, those who wouldn’t want a historical Adam would simply say this is an expression (like “from Dan to Beersheba”) that draws on the biblical *story*. It wouldn’t be hard for them.
Jim Putney
on September 22, 2015 at 7:05 am
Understood Michael.
It is just very difficult, I think, to maintain that Christ, who knows the historicity / non-historicity of the events, would specifically name Abel a prophet of God and require of that generation the guilt for his murder if in fact that crime and its victim never actually existed.
It may be easy for some to say that this is so, but mightn’t such a view undermine the faith of others who are honestly trying to weigh the reliability of Christ’s own pronouncements?
Just sayin’, if something is going to be tortured shouldn’t we choose to subject our exegetical methodology to the torture rather than the words of Christ?
MSH
on September 22, 2015 at 8:34 am
It could have that effect on others, but the incarnation by definition was an accommodation, and that bucket is where this would go. It doesn’t require that Jesus didn’t know XYZ. It just requires the notion that Jesus accommodated/condescended to the ignorance of his audience, or to the beliefs of his audience.
Jim Putney
on September 22, 2015 at 8:58 am
I understand the accommodation idea; I am trying to determine what its limitations are.
This case would seem to transcend the limitations of such a view because Jesus is declaring a specific judgement of guilt for a set of specific acts, including the murder of Abel. In pronouncing this judgement isn’t it legitimate only if the facts of the matter are true? Or am I missing something? Probably, since I often am.
MSH
on September 22, 2015 at 9:54 am
I’m only guessing at how the view would respond, but I suppose that, since Adam = Israel in such a position, the point of Jesus’s statement would be that “you guys have killed most every prophet raised up among the people of God” or “People who don’t really follow Yahweh have been killing people who speak for him since the beginning.” I don’t see how we’d conclude that wasn’t factually correct, especially parsing it the second way.
Jesus did use hyperbole as well, but that’s another issue (though related).
Jim Putney
on September 22, 2015 at 10:09 am
Thanks Michael,
I see their point in attempting to retain the theological message while accommodating their view (pun intended).
Can’t say I’m comfortable with that level of parsing, but thank you for the clarification. 🙂

…we are all gendered halves. If this is true, Genesis 2 is not making claims about biological origins of humanity, and therefore the Bible should not be viewed as offering competing claims against science about human origins. If this is true, Adam and Eve also may or may not be the first humans or the parents of the entire human race. Such an archetypal focus is theologically viable and is well-represented in the ancient Near East.
There’s a lot to like here (at least for me as an ancient Near East guy). It allows for historical figures. The portrayal of those figures presumes the intent of the author was to present an archetype and not be claiming the origin of the entire human race.
1. The difficulty for many would be that the view says that the traditional understanding (Adam is presented as the first of all humans) is a misreading. This view says that it is wrong to interpret Genesis 2-3 as literal history OR as science. And yet Adam and Eve did exist. The intent of the account under inspiration was not to convey literal history in terms of detail and chronology, but neither was it to invent two characters to make a point. They did exist, but what we learn about them here must be filtered through the archetypal intent of the material.
If this view is correct, it wouldn’t be the first time a powerful tradition has erred in its reading of the Bible. I don’t care much about tradition, so this doesn’t bother me.
2. For me, this view is easily married to the Adam as Israel view. I’m not saying John would like the marriage, but it’s do-able. Adam is an “archetype” for Israel as well as humans. It’s easy.
3. Walton’s view helps us realize that the issue must be broken down into smaller questions, each one of which must be worked through to see the options:
Adam and Eve existed. Were they the original human pair, or an original pair in the focus of the biblical writer. (Were they original only for the sake of the story, or for sake of its inspired representative strategy?)
Does the Adam and Eve story aim to tell us about science? The history of humankind? Or something else?
A Tennessee College is Forcing its Faculty to Swear They Believe Adam and Eve Existed
https://newrepublic.com/article/116858/bryan-college-forces-its-faculty-swear-historical-existence
Things are in ferment at Bryan College in Dayton, Tennessee. Named after William Jennings Bryan, one of the prosecution attorneys of the 1925 Scopes Trial (which also took place in Dayton), Bryan is an extremely conservative Christian school that adheres to Biblical literalism.
Until now. The press of science is beginning to discomfit even literalists, and is making incursions into Bryan.
The most recent scientific finding that’s causing Christian ferment is the calculation by evolutionary geneticists that the smallest size the population of humans could have experienced when it spread from Africa throughout the world was about 2250 individuals. That comes from back-calculating the minimum size of a human group that could have given rise to the extensive genetic diversity present today in non-African humans. Further, that figure is based on conservative assumptions and is very likely to be an underestimate.
2250 is, of course, not 2. That means that humanity could never have had just two ancestors within the time frame accepted by Biblical literalists. In other words, Adam and Eve did not exist—at least not in the way the Bible says. And that has huge repercussions for Christianity, for if Adam and Eve weren’t the literal parents of humanity, how did their Original Sin spread to us all? Original Sin is, of course, a pivotal part of most Christian doctrine, for without it there is no reason for Jesus to return and exculpate humanity from sin through his death and Resurrection. If Adam and Eve didn’t exist, but were simply a fiction, then Jesus died for a fiction.
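The “back-calculating” described here rests on standard population-genetics estimators that relate observed diversity to long-term effective population size. As a toy sketch of the core idea (all numbers below are hypothetical, chosen only to show the arithmetic; this is not the actual study's data or method), Watterson's estimator connects the count of variable sites in a sample to θ = 4·Ne·μ:

```python
# Toy sketch of how genetic diversity constrains minimum population size.
# Watterson's estimator: theta_W = S / a_n, where S is the number of
# segregating (variable) sites in a sample of n sequences and
# a_n = sum_{i=1}^{n-1} 1/i. For diploids, theta = 4 * Ne * mu, so an
# observed theta implies a long-term effective population size Ne.
# All inputs here are hypothetical, for illustration only.

def watterson_theta(num_segregating_sites: int, sample_size: int) -> float:
    """Watterson's theta for a sample of `sample_size` sequences."""
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating_sites / a_n

def effective_population_size(theta: float, mutation_rate: float) -> float:
    """Invert theta = 4 * Ne * mu to get Ne."""
    return theta / (4.0 * mutation_rate)

# Hypothetical locus: 120 variable sites among 10 sampled sequences,
# locus-wide mutation rate of 1e-5 per generation.
theta = watterson_theta(120, 10)
ne = effective_population_size(theta, 1e-5)
print(f"theta_W = {theta:.2f}, implied Ne = {ne:,.0f}")
```

The real calculation is far more involved (multiple loci, linkage, demographic models), but the logic is the same: the diversity observed today puts a floor under how small the ancestral population could have been.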
More liberal Evangelicals have responded by engaging in various species of special pleading, including assuming that Adam and Eve were merely the “federal heads” of humanity: two individuals among many who were designated by God to represent everyone else. That, of course, fails to explain how Original Sin started and spread.
More liberal theologians simply claim that the Adam and Eve story is a metaphor for our inborn “selfish” nature: our genetic endowment that leads us to act for ourselves rather than others. But that then makes animals of many species the bearers of Original Sin as well, and doesn’t explain how Jesus’s return helps us fight the tyranny of our “selfish genes”.
To a rationalist, all of these Sophisticated Theological™ gymnastics are amusing, and you can read such desperate apologetics at the website of BioLogos, an organization devoted to reconciling Jesus and Darwin. But the special pleading won’t convince anybody who isn’t wedded to the Christian mythos at the outset.
The least devious Christians (and that includes those at the Vatican, which professes the historicity of Adam and Eve) simply hold fast to literalism. Regardless of what genetics tells us, they say, the Bible takes precedence, and Adam and Eve were real historical figures from which we all descend.
And that, as reported by the Times Free Press of Chattanooga, Tennesee, is the new position of Bryan College, whose trustees have just added a rider to its “statement of belief” to expand that belief to a historical Adam and Eve. Since the original statement of belief is, like the Bible, inerrant and unchangeable, the new language is said to be a “clarification.”
And this clarification must, like the original statement of belief, be signed by all professors at the school. Here are the old and new statements from the newspaper:
What prompted this “clarification” is simply the advance of science, which shows that the Bible is flat wrong about Adam and Eve. Not every conservative Christian can comfortably ignore the new results from genetics (rejecting science smacks uncomfortably of backwoods hick-ishness), and some at Bryan have tentatively tried to find ways around the historicity of Adam and Eve. Several biology professors, for instance, are teaching a multiplicity of views about creation, a strategy that angers many of their coreligionists:
In 2010, Ken Ham, a nationally known creationist who runs the Creation Museum in Petersburg, Ky., wrote a scathing article criticizing Bryan College because of a graduate’s book. The graduate, Rachel Held Evans, wrote about how she had questioned the nuances of her evangelical upbringing and had come to new realizations about the world, including the belief that evolution was part of God’s creation plan.
Ham also criticized biology professor Brian Eisenback, who was quoted in USA Today saying that he taught all origin views and theories — including Genesis and evolution — without revealing his own beliefs.
“There are many colleges/seminaries — like Bryan College — across the nation with professors who compromise God’s word in Genesis and/or will not teach the authority of God’s Word in Genesis as they should. It’s about time that these colleges were held accountable for allowing such undermining of the authority of Scripture to the coming generation,” Ham wrote in a 2010 blog post.
Eisenback and Bible professor Ken Turner gained attention last year for their grant from the BioLogos Foundation to write a new curriculum on science education that will marry scientific evidence with evangelical Christian perspectives on interpreting Scripture and science. BioLogos is a nonprofit that believes God created the world over billions of years and works to further the ideas of evolutionary creationism.
BioLogos is the organization founded by Francis Collins, current head of the U.S. National Institutes of Health; its aim was to get evangelical Christians to accept science, including evolution. But since then it’s gone down a path that I can only describe as cowardly, refusing to take an official position on the historicity of Adam and Eve. But all the genetic evidence militates against that historicity, and it’s ironic that Collins is a geneticist (he no longer heads BioLogos). Nevertheless, BioLogos actively debates the “meaning” of Adam and Eve, and, by taking a grant from that organization to meld science and religion, the Bryan professors only exacerbate the tension that exists between these fields.
Last month, a chapel talk at Bryan featured a discussion with Wood and well-known evolutionary creationist Darrel Falk. At the end of their conversation, Livesay said he wanted to make a statement about Bryan College’s stance on origins. He said he did not agree with the views of BioLogos.
“Scripture always rises above anything else. Scripture rises above science. … Science at some point will catch up with the scripture,” Livesay said, according to an online podcast of the event.
Haynes, the trustee, said Livesay has brought up the need for clarification several times to the board. Christians have increasingly begun to question traditional interpretations of Genesis, though he believes the Bible is clear on the matter.
“When you review these things, the first thing you must do is go back to the scripture and make sure what you’re saying is compatible with scripture,” he said. “Scripture judges you.”
In the meantime, the students are also conflicted, for not all of them want to be seen as rejecting science wholesale:
Nearly 300 of the school’s 800 students signed a petition within a few days asking the trustees to reconsider the change. Joseph Murphy, in a Student Government Association letter to the administration, said the decision was made without faculty input and that the president and trustees were threatening academic freedom. He called the move unjust, uncharitable and unscriptural.
“We believe that this sets a precedent of fear and distrust in our community,” the petition read. “We believe that this will discourage potential faculty and staff from serving at Bryan and potential students from coming here.”
Remember, though, that what the students are objecting to is simply the new rider about Adam and Eve. They apparently don’t have any quarrel with the equally ludicrous claims about the creation of humans and Original Sin. As usual, you can pick and choose which statements of the Bible can be read as metaphor, and you don’t need good reasons. Remember as well that at nonreligious universities, professors are not required to sign any statement of belief—in evolution or any other proposition.
The Times Free Press notes that statements of faith are not uncommon in religious colleges, and lists some.
Our local Christian college, Wheaton College, which has a good reputation for academic rigor in other areas, also has a statement of faith that completely undercuts any notion of that university’s objective search for truth. Unsurprisingly, their statement strongly resembles the Nicene Creed.
Bryan is fighting a losing battle, but it will be a long battle. These vestiges of superstition, and of blind adherence to it, will eventually disappear as America becomes more secular. There will always be Biblical literalism, but I’m confident it will slowly wane. But it will wane not with the changing of minds, but over the corpses of its adherents, as the older generation dies off and the younger, exposed to secularism and doubt on the internet, begins to ask questions. (It’s telling that the students of Bryan College are the biggest protestors.) I am patient, for I know this change won’t happen in my lifetime. But I also know that in one or two centuries, Adam and Eve will be regarded as we now regard Zeus and Wotan.
There are still those who engage in the futile battle to change the minds of literalist Christians. BioLogos tried and failed, and is now fighting a rearguard action that involves not promoting science, but soothing the ire of creationists. The National Center for Science Education, which has been highly effective in fighting public-school creationism in the courts, is still trying to reassure evangelicals that their faith is compatible with evolution:
“The position they’re staking out with this new statement is not shared among all evangelicals, all Christians,” said Josh Rosenau, programs and policy director at the National Center for Science Education, which advocates teaching of evolution and climate science. “The evangelical position doesn’t have to be an outright rejection of human evolution. There are ways to be a Bible-believing literalist without being at odds with science.”
Well, yes, of course some evangelicals are friendly to evolution, though 46% of all Americans (not just evangelicals!) are young-earth creationists. But to tell literalist evangelicals that they can simply make their faith compatible with evolution simply isn’t on, for it misunderstands Biblical literalism, the tenacity of faith, and especially the role of Adam and Eve as buttresses reinforcing the whole edifice of western Christianity. Such accommodationism tries to force Christianity into the Procrustean bed of science, and it just won’t fit.
To claim that Bible-believing literalism is compatible with science is like saying that eating broccoli is compatible with being a lion. It’s not only silly, but it’s also a theological statement—something that science-promoting organizations shouldn’t be making. Literalism is literalism, and Bryan College is fighting to keep it.
The Book of Mormon and Adam and Eve Historicity
https://www.ldsdiscussions.com/adam
The story of Adam and Eve is one of the most famous of the Bible, and is told in both of the creation stories in Genesis (Chapters 1 and 2) to explain the beginning of life on Earth. In the traditional view, Adam and Eve were created by God in about 4,000 BCE, and are the original ancestors of every man, woman, and child on Earth today.
Everyone reading this almost certainly already knows the story, but this event is crucial to the idea of the “fall of man” which necessitates the atonement of Christ within Christianity. The story of Adam and Eve is also important for explaining the eventual population of the world, which will be important throughout the Bible as lineage plays a role in both the Bible and the Book of Mormon.
The Adam and Eve story is considered by all secular scholars and even many biblical scholars to be an etiological myth written to explain the origin of humans, which can be further illustrated by looking at advancements in science, history, and the text of the Bible itself.
Within Mormonism, the Adam and Eve story must be a literal event with a real, historical Adam and Eve due to the extensive use and expansion of Adam and Eve in the Book of Mormon, Book of Abraham, and the Doctrine and Covenants. Not only is the Adam and Eve story well known to the writer of the Book of Mormon, but Joseph Smith cites Adam as the “Ancient of days” spoken of in the Bible in multiple revelations, which almost every biblical scholar believes without hesitation is actually referring to God, not Adam.
Furthermore, Joseph Smith claims to see Adam in a vision (D&C 137:5), Joseph F. Smith claims to see Adam and Eve in another vision (D&C 138:38-39), and Joseph Smith claims to reveal the location of where Adam and Eve lived after expulsion from the Garden of Eden: Adam-ondi-Ahman in Missouri, where the early Saints had settled in 1838. All of these events need a historical Adam to bolster Joseph Smith's claim as a prophet, but if Adam and Eve were indeed an etiological myth, the entire truth claims of Mormonism become mythical along with it.
Problems with Adam and Eve Historicity
As summarized above, the Adam and Eve story cannot be a literal, historical event given what we know today about evolution, genetics, DNA, and biblical scholarship. All secular scholars and many biblical scholars believe that Adam and Eve's account is a mythical story created to explain the origins of humans and to serve as an introduction to the Bible. The traditional view of the Bible dates the Adam and Eve story to about 4,000 BCE, but most biblical scholars now agree that the Pentateuch was not compiled until much later. As the documentary hypothesis has evolved the dates have moved a bit, with many scholars now believing that Genesis was not compiled until the 6th or, more likely, 5th century BCE. (Davies, G.I, "Introduction to the Pentateuch")
But the Adam and Eve story can be shown not to be historical by a few different methods that I would like to quickly summarize here:
Evolution: The study of the evolution of humans (Homo sapiens) shows that modern-day humans likely began to evolve in Africa about 315,000 years ago. However, our human evolution likely began alongside apelike species millions of years ago, which is confirmed through DNA along with looking at the changes in fossils over time. For example, humans today share 99% of our DNA with chimpanzees and bonobos. (Science Magazine)
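The 99% figure is a pairwise sequence-identity measure. As a toy illustration (real human–chimp comparisons align whole genomes and must handle insertions and deletions), percent identity over two pre-aligned sequences is simply:

```python
# Toy percent-identity calculation over two pre-aligned DNA sequences.
# Real genome comparisons use alignment tools and handle indels; this
# sketch only counts matching bases at corresponding positions.

def percent_identity(seq_a: str, seq_b: str) -> float:
    if not seq_a or len(seq_a) != len(seq_b):
        raise ValueError("sequences must be non-empty and equal length (pre-aligned)")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / len(seq_a)

print(percent_identity("ACGTACGTAC", "ACGTACGTTC"))  # 9 of 10 match → 90.0
```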
While many like to discount the idea of evolution by calling it a theory, it has been proven through scientific methods over the years, and as technology and our knowledge have increased, the picture has only gotten more detailed – the overarching conclusions have not changed. (Wikipedia Overview of Evolution Timeline)
Fossil Records: Looking at fossil records, as early as 3.5 million years ago there is evidence that one hominin species, Au. afarensis, was already walking upright on two legs. Since that time, fossils show continuous changes in hominin skeletons until we arrive at modern humans.
This is intended to just be a quick overview, but I highly recommend reading overviews on the evolution of humans to understand that there is zero evidence for the idea that a man and woman simply began in our current human state with the ability to speak an Adamic language. In fact, there is an abundance of evidence against this idea, which can be easily understood by anyone interested in learning more by reading some basic overviews on human evolution. While a lot of apologists for the church imply that these issues are too complicated for people who aren’t experts to understand, I assure you that you are more than capable of understanding the implications of evolution and fossils for the Adam and Eve story. (Britannica Overview of Fossil Records)
DNA: Along with the evidence that humans have evolved over millions of years to the Homo sapiens we are today, we can look at DNA evidence to give us a wealth of information about our pasts. While the Bible’s Adam and Eve story in Genesis is traditionally believed to have happened about 6,000 years ago, a quick DNA test through 23 and Me will show that our ancestors can be traced back tens of thousands of years earlier.
For example, in my own 23 and Me DNA results, I have less than 2% of Neanderthal DNA. These Neanderthals are dated to have disappeared 40,000 years ago, meaning that my DNA can be traced back to at least 34,000 years prior to the Adam and Eve story occurring. Not only do I show Neanderthal DNA, but 23 and Me also gives me traits I inherited from that Neanderthal DNA. In my case, I am told that I have:
A worse sense of direction.
Difficulty discarding rarely-used possessions.
Less likely to have a fear of heights.
A better sprinter than distance runner.
All four of those traits are absolutely correct - the first two (and strongest matches in variants) have been told to me repeatedly throughout my life. I have a horrible sense of direction and I have always loved to collect things, and have a hard time getting rid of stuff that I haven't touched or looked at in years.
I highly recommend anyone interested in DNA and genealogy to check out a DNA service such as 23 and Me because the results are fascinating and you learn so much about your ancestry along with the uses of DNA to explain your heritage. As you can see above, even the Neanderthal traits are incredible to learn, and again are great proof of the power of DNA to understand our history and what makes us who we are.
Looking at DNA in more general terms, scientists can now date us closer to a genetic “Adam and Eve,” but even that creates more problems for the Adam and Eve story. First, the “genetic Adam and Eve” still dates to about 135,000 years ago. The bigger problem, however, is that the “ancient ‘Adam’ and ancient ‘Eve’ probably didn't even live near each other, let alone mate.” (Live Science)
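Dates like the ~135,000-year figure come from molecular-clock reasoning: two lineages that split T years ago accumulate divergence d ≈ 2μT per site, so T ≈ d / (2μ). A back-of-the-envelope sketch with illustrative numbers (not the published studies' actual inputs):

```python
# Molecular-clock back-of-the-envelope: T ≈ d / (2 * mu), where d is the
# pairwise divergence per site between two lineages and mu is the
# mutation rate per site per year. Numbers below are illustrative only.

def divergence_time_years(divergence_per_site: float, mu_per_site_per_year: float) -> float:
    return divergence_per_site / (2.0 * mu_per_site_per_year)

# Hypothetical: 0.27% divergence and mu = 1e-8 per site per year.
t = divergence_time_years(0.0027, 1e-8)
print(f"~{t:,.0f} years")  # on the order of 135,000 years
```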
Along with fossil evidence showing that humans likely evolved in Africa, DNA from human fossils also confirms that the earliest humans likely left Africa at least 200,000 years ago. (Live Science) What is really cool about these studies is that DNA is confirming the results found from studies in archaeology, which is an indicator that the two methods are on the right track.
Just like with evolution, while many seek to downplay the importance and accuracy of DNA (and we’ll get to that in more detail with the Book of Mormon and DNA), the bottom line is that it has been proven more conclusive every day. As geneticist Jamie Hanis Handy noted in a great podcast on the church’s DNA and the Book of Mormon essay (Mormon Stories Podcast), the advances in DNA are only making the picture sharper as if going from a 1 megapixel camera twenty years ago to today’s 100 megapixel cameras. The DNA evidence is a problem for Adam and Eve historicity that is only getting stronger with time – it is a field that has strong enough consensus that it is used every day in courts, science, and research to understand both who we are and where we came from.
Ancient Origin Myths: The Epic of Gilgamesh has many parallels to the Bible, most notably the global flood myth which will be covered in the next section. But it should be noted that there are also parallels to the Adam and Eve story. In the Epic of Gilgamesh, there is the story of Enkidu and Shamhat. “In both, a man is created from the soil by a god, and lives in a natural setting amongst the animals. He is introduced to a woman who tempts him. In both stories the man accepts food from the woman, covers his nakedness, and must leave his former realm, unable to return. The presence of a snake that steals a plant of immortality from the hero later in the epic is another point of similarity.” (Epic of Gilgamesh Parallels to the Bible)
Biblical scholarship: One of the clearest ways to understand that the Adam and Eve story is not historical and is a late addition to the Bible is by looking at the Bible itself. The creation story that details the Garden of Eden is the Jahwist/Yahwist source (J source) in Genesis 2, which was originally believed to have been written in the 7th century BCE, but is now understood to have been written between the 6th and 5th centuries BCE for reasons we will state below. (Baden, Joel S. J, E, and the redaction of the Pentateuch) There are two ways this is important to understanding how the writers of Genesis treated the Adam and Eve story as an origin myth.
The first is to note that the Adam and Eve account includes many elements of a fable including a talking snake, a man that is created out of dust, a woman created by putting the man in a deep sleep to take a rib, living to the age of 930 (Genesis 5:5), and the name Adam simply meaning “man” in Hebrew. This is a more simplistic way to look at the story of Adam and Eve, but you can see these elements of a fable throughout the story, which would indicate that the story was not meant to be taken anciently as literal history, but as an origin story to help explain where we came from.
The second, and more important, reason that we can tell the Adam and Eve story is not an ancient, historical event is by looking at how early prophets spoke of Adam and Eve. Historian John Hamer noted that if you look at the Bible you can see where the story of Adam and Eve becomes known to the ancient prophets and writers. This is crucial not just for understanding the dating of the Adam and Eve story, but how people in Old Testament times understood the Adam and Eve story.
If you look at the early writers of the Old Testament (Isaiah, Jeremiah, Amos, Micah, etc) and look for references to Adam and Eve, there are zero mentions of the story in their writings. By contrast, there are 116 mentions of Moses and 100 mentions of David, showing that these writers were aware of many early stories of the Bible yet had no knowledge of the Adam and Eve story.
Outside of Genesis, there is just one mention of Adam and Eve in Deuteronomy (part of the Pentateuch that was compiled around the same time), and then one mention each in the later books of Job (6th century BCE) and Chronicles (4th century BCE). That no earlier prophets mention such a vital event is a clear indicator that the Adam and Eve story appears late and was unknown to the Old Testament writers, and the fact that it barely gets mentioned once it is known would highlight that early writers did not believe it was literal history, but mythical.
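Mention counts like these are straightforward to reproduce against any plain-text Bible corpus. A minimal sketch (the sample verses below are a stand-in for a real corpus file, and the "Book chapter:verse text" layout is an assumption for illustration):

```python
# Minimal sketch: tally how often given names appear in a text corpus.
# The sample verses below stand in for a real plain-text Old Testament;
# swap in lines read from an actual corpus file for real counts.
import re
from collections import Counter

def count_mentions(lines, names):
    counts = Counter({name: 0 for name in names})
    # Word boundaries so "Adam" does not match names like "Adamah".
    patterns = {name: re.compile(rf"\b{re.escape(name)}\b") for name in names}
    for line in lines:
        for name, pat in patterns.items():
            counts[name] += len(pat.findall(line))
    return counts

sample = [
    "Isaiah 1:1 The vision of Isaiah the son of Amoz ...",
    "Micah 6:4 ... I sent before thee Moses, Aaron, and Miriam.",
    "Amos 6:5 ... invent to themselves instruments of musick, like David;",
]
print(count_mentions(sample, ["Adam", "Moses", "David"]))
```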
In other words, the story of Adam and Eve today is considered one of the most important events in our history as it necessitates the atonement and is central to the Book of Mormon, yet it was unknown to early prophets. (Infants on Thrones podcast: Seven Deadly Heresies - The Adam and Eve conversation is about 56 minutes in) As John Hamer notes in his conclusion, “not only does it not work with this robust theory of evolution, all of the entire fossil record and everything else (genetics, DNA, and all the things that completely track with [those theories]), but the story itself within the Bible itself doesn’t portray itself as the most ancient thing even though as the Bible is now edited it comes first.”
The story of Adam and Eve cannot be reconciled with the vast evidence we have whether it’s genetics, DNA, archaeology, or biblical scholarship to be a literal, historical event. Many religions have found ways to work with the Adam and Eve story being an etiological myth, but as we will see, this becomes much trickier within Mormonism because of the extensive use and expansion of the story as a literal, historical event.
Problems with the Book of Mormon and Adam and Eve
As we mentioned in the summary, the Adam and Eve story is only mentioned in the Old Testament twice beyond the first five books – in the books of Job and Chronicles. The problem is that the Adam and Eve story was not known to the earlier writers such as Isaiah and Jeremiah, and as such was not included in their writings even as mentions of David and Moses numbered over 100. Put another way, if the Adam and Eve story was historical, why would such a monumental event effectively disappear after it happened while other figures from the Old Testament are mentioned repeatedly throughout?
In the New Testament, mentions of Adam and Eve become more numerous, which makes sense given that the Pentateuch was well known by this time. Setting aside references to Adam that merely establish lineage, as in Luke and Jude, most of the references to Adam in the New Testament establish early Christian theology, which is evident from Paul’s epistles.
A few of Paul’s references to Adam are as follows:
Romans 5:19 For as by one man's disobedience many were made sinners, so by the obedience of one shall many be made righteous.
1 Corinthians 11:8 Indeed, man was not made from woman, but woman from man.
1 Corinthians 15:22 For as in Adam all die, even so in Christ shall all be made alive.
1 Corinthians 15:45 And so it is written, The first man Adam was made a living soul; the last Adam was made a quickening spirit.
In the Book of Mormon, however, there are 26 mentions/references of Adam, along with 28 mentions in the Doctrine and Covenants. This is problematic in that the teachings of the church not only insist on a literal Adam and Eve, but Joseph Smith even expands on the story in the Book of Mormon, which is then written back into his revision of Genesis that later becomes the Book of Moses.
The Book of Mormon states that Nephi killed Laban for the brass plates around 600-592 BC, which puts it around the time that the Pentateuch was likely compiled according to the documentary hypothesis. While it is entirely anachronistic to have brass plates with the Bible engraved on them in Egyptian, the timing makes it possible that they could have heard of Adam and Eve by this point even if it is impossible that those books would have been on metal plates by that time.
But the bigger problem is that the Book of Ether, which would have originated around 2200 BCE, speaks of Adam. From Ether 1:
3 And as I suppose that the first part of this record, which speaks concerning the creation of the world, and also of Adam, and an account from that time even to the great tower, and whatsoever things transpired among the children of men until that time, is had among the Jews—
4 Therefore I do not write those things which transpired from the days of Adam until that time; but they are had upon the plates; and whoso findeth them, the same will have power that he may get the full account.
As we will note throughout these sections on biblical scholarship and the Book of Mormon, the writers of the Book of Mormon are aware of ideas that would never have been known in the time the events would have taken place. While we often hear of the anachronistic items/animals in the Book of Mormon such as horses, steel, chariots, etc, the more problematic anachronisms come from the writer of the Book of Mormon having a 19th century worldview of the Bible before some of the material was even written or any form of Christianity had actually developed yet.
But the problem goes beyond the Book of Mormon, as the Book of Abraham references Adam twice in chapter 1 and then retells the Yahwist/Jahwist (J source) version of the Garden of Eden story in chapter 5, implementing Joseph Smith’s change in theology from a single God to a plurality of gods. This again treats Adam and Eve as literal history, yet the Book of Abraham would have been written before 1650 BCE, a thousand years before the Adam and Eve story was known.
Beyond the scriptures, Joseph Smith himself claimed to see Adam in a vision at the Kirtland temple. In D&C 137, Joseph Smith records a vision where he states “I saw father Adam, and Abraham and Michael.” This vision is problematic not just for its use of a literal Adam, but because Joseph Smith would later declare that Adam and Michael are one and the same, yet he claims to see both.
Following Joseph Smith’s vision, future prophet Joseph F. Smith also claims to see Adam and Eve in another vision (D&C 138:38-39):
“38 Among the great and mighty ones who were assembled in this vast congregation of the righteous were Father Adam, the Ancient of Days and father of all.”
Again, this vision is a problem for biblical scholarship beyond just Adam’s historicity, which we will cover below.
Lastly, the church refers to a spot in Missouri as Adam-ondi-Ahman, which is where Joseph Smith claimed Adam and Eve went after being expelled from the Garden of Eden. This has a number of problems even if we believe the Adam and Eve story was a literal event, but if it was not historical, how could Joseph Smith receive a revelation that Adam and Eve happened to live in the very spot the early Saints were settling in Missouri at the time?
Complicating the problems even further is that this revelation, D&C 116, incorrectly labels Adam as the “Ancient of Days,” which is likely where the Adam-God doctrine originated. D&C 116 is a short entry, but states:
“Spring Hill is named by the Lord Adam-ondi-Ahman, because, said he, it is the place where Adam shall come to visit his people, or the Ancient of Days shall sit, as spoken of by Daniel the prophet.”
The problem is that the “Ancient of Days” referred to in Daniel is God – not Adam. This is pretty clear when reading Daniel 7 in context:
9 I beheld till the thrones were cast down, and the Ancient of days did sit, whose garment was white as snow, and the hair of his head like the pure wool: his throne was like the fiery flame, and his wheels as burning fire.
10 A fiery stream issued and came forth from before him: thousand thousands ministered unto him, and ten thousand times ten thousand stood before him: the judgment was set, and the books were opened.
This is an even bigger problem than the Book of Mormon because Joseph Smith is not only relying on a literal Adam character against the evidence, but he is placing Adam into the Bible via revelation from God that is simply incorrect. Perhaps even more problematic is that Joseph Smith never makes the connection of Adam being the “Ancient of Days” until Sidney Rigdon proclaims it in the May 1834 Evening and Morning Star when he writes:
In the 24 chapter of Isaiah, and 23 verse, the prophet, after having described one of the greatest desolations ever pronounced on the head of any generation of men, says, "Then the moon shall be confounded, and the sun ashamed, when the Lord of hosts shall reign in mount Zion, and in Jerusalem, and before his ancients gloriously." We have before seen that this reign was to last a thousand years; and his ancients, before whom he was to reign in mount Zion, and in Jerusalem, gloriously, were all the redeemed from among men, of every tongue, language, kindred, and people.
According to Daniel, he (Jesus) was to come to the ancient of days: here he is said to reign before his ancients, that is, all the saints from our father Adam, down; for who could the ancient of days be but our father Adam? surely none other: he was the first who lived in days, and must be the ancient of days. And to whom would the Savior come, but to the father of all the race, and then receive his kingdom, in which he was to reign before, or with his ancients gloriously? Let it here be remarked, that it is said to be in mount Zion, and in Jerusalem, where the Lord is to reign before his ancients gloriously."
This is a misreading of the Bible, which is why almost no non-LDS scholar would entertain the idea of Adam being the “Ancient of Days.” That reading would put Adam in a higher position than Jesus (in Daniel, the Son of man comes to the Ancient of Days, indicating that the Ancient of Days is above him), which would indicate that Adam is indeed part of the Godhead, as would be taught by Brigham Young in the temple.
Keep in mind that Joseph Smith never made this connection during the production of the Book of Mormon, nor when translating the Bible as he revised Daniel. Yet soon after Sidney Rigdon made this claim in May 1834, Joseph Smith began teaching the same idea.
The vision referenced above was in January 1836, but Joseph Smith actually added this concept into a revelation when changing it prior to the Doctrine and Covenants being released in 1835. In D&C 27, Joseph Smith makes vast changes to the original revelation which was about only drinking wine with sacrament that was “made new among you.” In these changes, Joseph Smith seeks to create a line of authority from Adam down to Joseph Smith along with the keys of the priesthood. Within these changes is the following text in the voice of God: “And also with Michael, or Adam, the father of all, the prince of all, the ancient of days.”
Here Joseph Smith is changing an 1830 revelation to include his new theology in 1835 that was developed after Sidney Rigdon taught it to him. Joseph Smith also learned of the idea of a Melchizedek priesthood through Sidney Rigdon, which was also added into prior revelations in the same manner.
What we’re seeing here is Joseph Smith expanding on the Adam and Eve story in a literal way that simply goes against all evidence that we discussed above, but also retrofitting these ideas in 1835 that were clearly not developed or thought of in 1830. That is a problem, along with Joseph Smith claiming to see both Adam and Michael in the 1836 vision when he proclaims in 1835 they are the same person.
At the end of the day, the theology of Joseph Smith gets very messy once you look at how it progressed and changed as he learned from both the people and ideas around him. This is very apparent when looking at the creation and evolution of Joseph Smith’s First Vision, and the priesthood restoration likewise has a lot of similarities, with revelations changed to backdate theological developments.
The Adam and Eve story simply cannot be a literal, historical event, however, which makes Joseph Smith’s claims in the voice of God highly problematic. Furthermore, Joseph’s declaration that Adam is the Ancient of Days and “father of all” is what eventually becomes the Adam-God doctrine, which is a very problematic teaching from prophet Brigham Young that Adam is our God:
“Now hear it, O inhabitants of the earth, Jew and Gentile, Saint and sinner! When our father Adam came into the garden of Eden, he came into it with a celestial body, and brought Eve, one of his wives, with him. He helped to make and organize this world. He is MICHAEL, the Archangel, the ANCIENT OF DAYS! about whom holy men have written and spoken—He is our FATHER and our GOD, and the only God with whom WE have to do. Every man upon the earth, professing Christians or non-professing, must hear it, and will know it sooner or later. They came here, organized the raw material, and arranged in their order the herbs of the field, the trees, the apple, the peach, the plum, the pear, and every other fruit that is desirable and good for man; the seed was brought from another sphere, and planted in this earth. The thistle, the thorn, the brier, and the obnoxious weed did not appear until after the earth was cursed. When Adam and Eve had eaten of the forbidden fruit, their bodies became mortal from its effects, and therefore their offspring were mortal.” (Brigham Young April 1852 General Conference, Journal of Discourses 1:50-51)
Church leaders today disavow the Adam-God doctrine, but it was also taught in the temple as divine truth and the last paragraph of the ‘Lecture at the Veil’ illustrates the point that these ideas originated with Joseph Smith’s teachings and revelations (note: while we do not post current temple scripts or pictures, because this has been disavowed by the church it is not considered sacred by the church today and I feel is fair to post in order to show how these ideas originated):
“Father Adam's oldest son, Jesus the Savior, who is the heir of the family, is Father Adam's first begotten in the spirit world and the only begotten according to the flesh (as it is written), Adam in his divinity having gone back into the spirit world and come in the spirit to Mary, and she conceived. For when Adam and Eve got through with their work in this earth, they did not lay their bodies down in the dust but returned to the spirit world, from whence they came.” (Journal of L. John Nuttall, secretary to Brigham Young, 7 February 1877)
There is not a single area of Mormonism that does not require a literal Adam whether we’re looking at the Book of Mormon, Book of Abraham, Book of Moses, Doctrine and Covenants, or even the endowment ceremony. If Adam and Eve are not historical figures but instead are etiological myths as the evidence points to, then every teaching that relies on a literal Adam becomes non-historical along with it.
Apologetic Response to the Adam and Eve Problems in Mormonism
The church teaches that Adam and Eve are literal, historical characters who were the first man and woman about 6,000 years ago. Apostle Bruce R. McConkie makes this clear in a 1981 speech to BYU:
The fall of Adam and the atonement of Christ are linked together—inseparably, everlastingly, never to be parted. They are as much a part of the same body as are the head and the heart, and each plays its part in the eternal scheme of things. The fall of Adam brought temporal and spiritual death into the world, and the atonement of Christ ransomed men from these two deaths by bringing to pass the immortality and eternal life of man. This makes the fall as essential a part of the plan of salvation as the very atonement itself.
In our increasingly secular society, it is as uncommon as it is unfashionable to speak of Adam and Eve or the Garden of Eden or of a “fortunate fall” into mortality. Nevertheless, the simple truth is that we cannot fully comprehend the Atonement and Resurrection of Christ and we will not adequately appreciate the unique purpose of His birth or His death—in other words, there is no way to truly celebrate Christmas or Easter—without understanding that there was an actual Adam and Eve who fell from an actual Eden, with all the consequences that fall carried with it.
I do not know the details of what happened on this planet before that, but I do know these two were created under the divine hand of God, that for a time they lived alone in a paradisiacal setting where there was neither human death nor future family, and that through a sequence of choices they transgressed a commandment of God which required that they leave their garden setting but which allowed them to have children before facing physical death.
These statements become an irreconcilable problem if Adam is not a literal character, which is evident from the areas we highlighted above such as archaeology, DNA, genetics/evolution, and biblical scholarship. There is simply no evidence to point to a literal Adam and Eve from a secular point of view, and the biblical scholarship makes clear that the story was not known to early prophets until it was added when the Pentateuch was compiled between 600 and 500 BCE.
The apologetics for belief in a literal Adam and Eve extend beyond Mormonism, but we will focus on Mormonism because of the literal necessity that is created between the Book of Mormon and Doctrine and Covenants as covered above.
FAIR Mormon concedes that “the Church consistently insists that there is a historical Adam,” which is certainly true, but then they offer that members can take a metaphorical approach as well. From FAIR:
“Beyond the existence of a historical Adam, the rest of it can be understood literally or metaphorically, or more commonly as a mixture of these extreme positions”
The problem here is that the only metaphorical use of the Adam and Eve story FAIR cites is the rib of Adam being used to create woman. They cite Spencer W. Kimball stating that “Modern prophets have taught that the creation of woman from the rib of the man is to be taken figuratively.”
FAIR takes this quote to say that members can take both a literal and metaphorical view, and that “as we find the approach that resonates with our own understanding and our own spiritual witness, I think that as long as we try to answer the question of what the scriptures are trying to teach us, we will do reasonably well. It is only when we try to assert something through the text that was never intended that we run into trouble.”
The problem here is the last sentence, because Joseph Smith is absolutely asserting the Adam and Eve story as a literal, historical event when it was likely never meant to be received that way by the Israelites in their day. Ancient readers did not have the same concept of history as we do, but in this case Joseph Smith took the Adam and Eve story from the Bible and expanded it in both the Book of Mormon and his revelations, and that is where, as FAIR states, he ‘ran into trouble.’
FAIR then maintains that the “issue of the first man” is a “flexible” concept, and that “Part of the LDS view of Adam comes from this historical figure as a historical figure. But part of the LDS view comes from the ways in which Adam is just like ourselves - and often this comparison, intended by the text, is presented as metaphor.”
Again, the text is not presented as a metaphor within the Book of Mormon or Doctrine and Covenants. If we are to say the rib is a metaphor, then surely the talking serpent is as well. And if you concede those two elements are metaphorical, surely you can see the problem with insisting that Adam, who we are told lived to 930 years old and whose name simply means “man” in Hebrew, is literal.
FAIR then gives a brief list of ideas that “might be more essential than others and non-negotiable when working out evolution.” I’m summarizing here, but their list includes:
• Adam and Eve being literal historical people
• Adam being the first in a line of priesthood-holding patriarchs
• Adam's "fall" being the start of the roughly 6,000 years of the earth's temporal existence
• Adam and Eve being the first of God's spirit children
• The perfection of the God(s) that made us.
We’ve covered this above, but Adam and Eve cannot be historical people for the reasons we outlined, which also directly contradicts the idea of a 6,000-year temporal existence of the Earth. We can point to DNA tests that go back 40-60,000 years without any interruptions – even if we accept the possibility that there were pre-Adamites that roamed the Earth, there is still zero evidence through DNA, migrations, genetics, etc that there was any change 6,000 years ago that led to us all being born through Adam and Eve as the pre-Adamites simply went extinct.
Remember that the need in Mormonism for a 6,000 year old temporal existence comes via revelation from God to Joseph Smith. From D&C 77:
6 Q[uestion]. What are we to understand by the book which John saw, which was sealed on the back with seven seals?
A[nswer]. We are to understand that it contains the revealed will, mysteries, and the works of God; the hidden things of his economy concerning this earth during the seven thousand years of its continuance, or its temporal existence.
7 Q. What are we to understand by the seven seals with which it was sealed?
A. We are to understand that the first seal contains the things of the first thousand years, and the second also of the second thousand years, and so on until the seventh.
As FAIR noted above, it was when Joseph Smith tried to “assert something through the text that was never intended that we run into trouble,” which can be seen in the expansion of Adam as a literal, historical figure. Furthermore, if Adam is not a historical person, then Adam would not be the first priesthood holder nor would he be the first of God’s spirit children – those are both developments that are made after insisting on a literal Adam and Eve story.
Joseph Smith’s assertions also lead him to proclaim that the place Adam and Eve lived just happened to be in the same spot in Missouri that the Saints settled in, which is simply impossible given what we know about the origins of humans. Humans began in Africa, and there is simply no evidence in any way to suggest that human life began in America, let alone Missouri.
This problem gets even more complicated when we look into the fact that Joseph Smith proclaimed that Adam was the “Ancient of Days” only after Sidney Rigdon wrote about it in 1834, which leads to the Adam-God doctrine in the church as Daniel 7 is clearly referring to God as the “Ancient of Days.”
Regarding the “Ancient of Days” issue in D&C 116, FAIR does have a write-up, although their response is geared more towards the problem that Joseph Smith creates by stating that the “Ancient of Days” is Adam, which leads to the Adam-God doctrine famously taught throughout Brigham Young’s life - even being included in the temple ceremony.
FAIR begins their response by stating “The real question should be how does one justify their interpretation of Ancient of Days in Daniel as only God.”
The author of the response then cites one non-LDS scholar who contends that the phrase “Ancient of Days… in reference to God...is unprecedented in the Hebrew texts."
If this were the case, why did Joseph Smith not make any note or change when revising Daniel in the Joseph Smith Translation of the Bible? Moreover, if you read Daniel 7 in context as we highlighted above, it is referring to God. And since the phrase “Ancient of Days” only appears in Daniel, the use of that phrase is unprecedented in the Bible as a whole.
We will cover this in many other subjects as you continue through these pages, but this is an area where apologists are defending an idea that simply goes against all evidence and consensus. If you do some quick Google searches about the “Ancient of Days,” they all will refer to God because if you read Daniel 7 it is quite obviously God that is being spoken of. In other words, the only way Adam is the "Ancient of Days" is if he is God, which was taught as revelation from God by Brigham Young, but has since been disavowed. And that's a massive problem that is often brushed away as Brigham "speaking as a man" when he clearly believed he was speaking as a prophet of God.
The problem for apologetic responses to the Adam and Eve problem is that Joseph Smith and other prophets have doubled and tripled down on a literal, historical Adam and Eve to the point that there is simply no good way out. While FAIR contends you can hold both a historical and metaphorical Adam and Eve, the scriptures produced by Joseph Smith cannot be historical or true if Adam and Eve were not real, historical people.
Conclusion
The story of Adam and Eve was written for the understanding of the people of the time (600-500 BCE), but the historical information about its creation was not known to Joseph Smith when he produced the scriptures of Mormonism. Because Joseph Smith doubled down on a literal reading that was likely never intended and then expanded the Adam and Eve story within the Book of Mormon, Doctrine and Covenants, and temple ceremony, he created a very difficult problem for the truth claims of Mormonism.
The evidence is clear that Adam and Eve cannot be historical whether we look at it through the lens of DNA, archaeology, genetics, or biblical scholarship. This is an irreconcilable problem when Joseph Smith claimed revelations from God that built upon a literal Adam and Eve story, called Adam the “Ancient of Days” which almost every scholar cites definitively as God, and made Adam and Eve a central part of the temple ceremony.
The Jaredite plates speak of Adam at a time when no living prophet would have been aware of the Adam and Eve story, which shows that the writer of the Book of Mormon was aware of events that would be anachronistic to the time of the Jaredites. And the revelation to Joseph Smith claiming the “Ancient of Days” was Adam shows a fundamental misreading of the Bible, one that originated with the writings of Sidney Rigdon just a year earlier in 1834, not with a revelation from God.
While we focus so often on the more common problems with the truth claims of Mormonism, the issues with biblical scholarship are just as damning to the truth claims if not more so. Adam and Eve play such a central role in Mormonism due to Joseph Smith’s teachings and revelations that they are a vital part of the temple ceremony, leading to the story of Adam being our God being taught to every member as part of the “Lecture at the Veil” under the prophet Brigham Young.
A literal Adam and Eve story is necessary to the truth claims of Mormonism from the Book of Mormon, Abraham, Moses, and all through the Doctrine and Covenants. There is not a single aspect of Mormonism that does not rely on a historical Adam and Eve. As genetics, archaeology, and biblical scholarship all continue to confirm that there was not a literal Adam and Eve that lived 6,000 years ago, it creates a problem at the very foundation of Mormonism’s historical truth claims that the scriptures are historical and the revelations from God.
I know this is a difficult process and how crushing it is to learn a religion you were raised with or converted to is not true. But if the church is true, then you should be able to read through our materials without any fear. As Apostle James Talmage said, "The man who cannot listen to an argument which opposes his views either has a weak position or is a weak defender of it. No opinion that cannot stand discussion or criticism is worth holding." I don't think I could say it better myself, and I hope members will take it to heart and read these pages with an open mind.
Category Archives: Wireless Telegraph (Innovating in Combat)
Prior to the outbreak of WW1 in August 1914, many of the techniques to be used in later years for radio communications had already been invented, although most were still at an early stage of practical application. Radio transmitters predominantly used spark discharge from a high-voltage induction coil. The transmitted signal was noisy and rich in harmonics and spread widely over the radio spectrum.
The ideal transmission was a continuous wave (CW), and there were three ways of producing one: by an HF alternator, a Poulsen arc generator or a valve oscillator. The first two of these were high-power generators and not suitable for battlefield communication. Valve oscillators were eventually universally adopted. Several important circuits using valves had been produced by 1914. Predominant amongst these were the amplifier, detector and oscillator. The oscillator, apart from its use as a CW generator for radio transmitters, was also used in radio receivers in a heterodyne circuit, where the resulting beat note rendered the Morse signal as an audible tone in the headphones.
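To make the heterodyne idea concrete: multiplying an incoming CW carrier by a local oscillation produces components at the sum and difference frequencies, and the difference lands in the audible range. The sketch below is a modern numerical illustration in Python/NumPy with assumed frequencies, not a model of any period receiver.

```python
import numpy as np

fs = 100_000                      # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of signal

f_signal = 20_000                 # incoming CW Morse carrier (assumed), Hz
f_local = 19_000                  # local oscillator (assumed), Hz

# Heterodyning: the product of two sinusoids contains the sum and
# difference frequencies: sin(a)sin(b) = [cos(a-b) - cos(a+b)] / 2.
mixed = np.sin(2 * np.pi * f_signal * t) * np.sin(2 * np.pi * f_local * t)

# Find the dominant component in the audio range: the beat note that
# makes the Morse keying audible in the headphones.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
audio = freqs < 5_000
beat = freqs[audio][np.argmax(spectrum[audio])]
print(f"Beat note: {beat:.0f} Hz")   # the 1 kHz difference frequency
```

Retuning the local oscillator shifts the pitch of the note, which is exactly how an operator would adjust a heterodyne receiver for a comfortable tone.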
Valves at this time were still at a primitive state of development. Those available were the Fleming diode, the de Forest audion triode and the C and T triodes designed by the Marconi engineer Henry Round. All the triodes were gas-filled to improve their sensitivity but had erratic performance.
Both the C and T valves were used in the Marconi Short Distance Wireless Telephone Transmitter and Receiver. This radio, however, would not have been robust enough for use under battlefield conditions. The C valve, however, was used by the army in direction finding stations.
Early army radios
At the start of the war the only radios available were a few 500-watt and 1500-watt spark transmitters and their crystal-detector receivers. The 500-watt pack sets were used with Cavalry Brigades and the 1500-watt wagon sets with Cavalry Divisional Headquarters and General Headquarters.
The principal method of communication by the British army, up to late 1917, was by cable for speech and Morse transmission. Initially, a single cable was laid above ground and the earth used as the return. However, the cable was vulnerable to damage by enemy fire and by the passage of tanks across the battlefield, a problem not solved even when buried cable was used. Very often communication was not possible, particularly when troops were moving rapidly forward or in retreat. During the course of the war tens of thousands of miles of cable were laid and, at times, there was an acute shortage of replacement cable.
It was found that the Germans were able to listen in by picking up earth currents or tapping into the cable. This was not realized at first but, when discovered, it was necessary to limit the number and content of messages and, where possible, to use codes or encryption. A solution to this was the Fullerphone, which made the signal immune from eavesdropping, but it could not be used for telephony.
Other non-wireless means of sending messages, used with mixed success during the war, were runners, dispatch riders, pigeons, lamps and flags.
Up until the end of March 1918 the Royal Flying Corps was part of the army, and experimental work on aircraft radio communication was carried out at Brooklands and later at Biggin Hill. The development group, headed by Major Charles Prince, also worked on the development of CW voice transmission. This culminated in mid-1916 with the successful demonstration of ground-to-air speech communication. However, it was to be a further two years before suitable equipment was incorporated in aircraft. Consequently all the early radios were spark transmitters fitted in the aeroplanes and crystal receivers on the ground.
No 1 Aircraft Spark
Amongst the earliest radios to be used in aeroplanes was the 30‑watt, No 1 Aircraft Spark (Figure 1), powered from an accumulator. The set was designed in 1914 and fitted to approximately 600 aircraft during 1915. It was used for spotting enemy artillery and reporting back to ground by Morse code. There were several variants of the set and, altogether, nearly 4000 of these were manufactured.
Unlike in the RFC, there was a general lack of enthusiasm in the army for using radios, particularly during the first two years of the war. There were several reasons for this: the equipment was bulky; the accumulators needed frequent re-charging; and there was a genuine fear that the enemy would be able to intercept the messages.
This situation was to change later in the war when radio had proved to be the only reliable way to communicate, particularly when troops were on the move.
The BF Trench Set
One of the earliest radios to be used in the trenches was the 50-watt Trench Set, also known as the BF Set (Figure 2), which was used for communication from Brigade to Division. This went into action in the Battle of Loos in September 1915 and in the first Battle of the Somme on July 1st 1916. The transmitting portion of the set was based on the design of the No 1 Aircraft Spark set. The receiving portion used a carborundum crystal detector. It was powered from an accumulator and also required dry-cell batteries for biasing the carborundum crystal and an internal test buzzer.
Its range was 3.7km with aerials mounted on masts but this reduced to 1.1km when the aerial was run close to the ground.
Careful planning of frequencies was required in order to minimize interference from neighbouring spark transmitters, a problem much simplified when CW sets came into use.
The BF set was used extensively during the second half of the war and approximately 1200 were manufactured.
130-watt Wilson Trench Set & Short Wave Tuner Mk. III
The Wilson Transmitter was used primarily for Division to Corps communication and as a Corps Directing Station. This set came into service about the same time as the BF Trench Set. It had a fixed spark gap with a motor-driven, high-speed interrupter rather than the slower magnetic interrupter. The greater number of sparks produced a musical note in the headphones, making the Morse signal easier to hear through interference. The transmitter had the same frequencies as the BF set, and the higher power meant that the range was up to 8.3km. The set was powered by an accumulator.
Tuner Short Wave Mk. III*
The Mk. III version of this tuner was introduced in 1916 and the Mk. III* in 1918. Its prime purpose was to receive Morse messages from aircraft flying over the trenches but it was also used with the Wilson Set. The receiver used a crystal detector and there was a buzzer for calibrating and testing the tuner. Total production was 766 transmitters and 6595 receivers.
Later Valve Developments
Towards the end of 1915 an entirely new type of valve was developed under Colonel (later General) Gustave Ferrié, who was in charge of the French Military Telegraphic Service. The construction was very simple: it had a straight tungsten filament, a spiral grid and a cylindrical anode. It was evacuated to a low pressure and, during the manufacturing process, the glass and metal parts were heated to a sufficient temperature to release occluded gases. The valve, known as the TM, was immensely successful and widely used throughout the war; over 100,000 were made by the two French companies, Fotos and Métal. By 1916 it was being manufactured in Britain as the R-valve.
There were many variants, including the Air Force C and D, and the low-power transmitting valves B, F and AT25. Two higher power transmitting valves, introduced in late 1917, were the T2A and T2B which had 250 watt dissipations. These were used by the RFC (later RAF) in ground station CW transmitters.
One problem with the TM and R-valve was the high capacitance between the anode and grid. This made its use as an RF amplifier very difficult because energy fed back from the output of the valve to its input was liable to cause unwanted oscillation.
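The scale of this problem can be illustrated with the Miller effect, which the paragraph above describes qualitatively: in an amplifying stage, the anode–grid capacitance appears at the input multiplied by roughly (1 + A), where A is the stage gain. The component values below are assumptions chosen for illustration, not measured figures for the TM or R valve.

```python
# Illustrative sketch of the Miller effect in a triode amplifier stage: the
# anode-grid capacitance is effectively multiplied by (1 + gain) as seen from
# the grid, which is why a high-capacitance valve was hard to use as an RF
# amplifier. Values are assumed, not historical measurements.

def miller_input_capacitance(c_anode_grid_pf: float, stage_gain: float) -> float:
    """Effective input capacitance (pF) seen at the grid of a triode stage."""
    return c_anode_grid_pf * (1.0 + stage_gain)

# Assume ~8 pF anode-grid capacitance and a voltage gain of 10:
c_in = miller_input_capacitance(8.0, 10.0)
print(f"Effective input capacitance: {c_in:.0f} pF")  # 88 pF
```

Even a modest gain thus inflates a few picofarads into a large effective input capacitance, making unwanted feedback and oscillation much more likely.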
To overcome this, Round of the Marconi Company developed the type Q in 1916, which featured small size and low capacitance. It had a straight tungsten filament terminated by two pointed metal caps, one at each end of the bulb. Both the anode and grid connections were taken to two further caps near one end of the tubular glass bulb. The Q was primarily intended as a detector but was also used as an amplifier. Its overall length was 73mm and the bulb diameter 16mm. Later in the war Round designed the V24, which was better suited as an RF or AF amplifier.
Later army radios
Valve radios first made their appearance in 1916. One of the earliest was the Tuner Aircraft Valve Mk. I but this was not made in significant numbers.
W/T Set Forward Spark 20 watt “B”
This set, which came into service in 1917, was also known as ‘The Loop Set’ and was used for forward communication. There were both Rear Station and Front Station sets, with two versions of each. There were also separate receivers for the Rear and Forward Stations. These receivers had two valves, which were either the French TM or the British R.
The transmitter had a fixed spark gap powered directly from an induction coil operating in a similar way to the BF Trench Set. The power for the stations was supplied by an accumulator and a 32-volt HT battery.
Approximately 4000 of the transmitters and receivers were manufactured.
W/T Trench Set Mk. I 30-watt
The first CW sets for field use were made in 1916. They used a single valve for both the transmitter and receiver circuits and were used for forward communication by ICW (interrupted continuous wave). The Mk. I* version came into service in 1917 and incorporated a high-speed interrupter to modulate the transmission.
W/T Set Trench CW Mk. III*
This (CW) set comprised a transmitter and a heterodyne receiver in separate boxes. It came into service in 1917 and was used for forward area telegraphy.
The transmitter was rated at 30-watts and had a range of 3.7km. It utilised two valves which were either the type B or AT25.
The receiver had two R valves. The first of these was used in a heterodyne circuit and the second as an audio frequency amplifier for the Morse signal.
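The role of the heterodyne stage described above can be shown numerically: an unmodulated CW carrier is inaudible on its own, but mixing it with a local oscillation produces a beat note at the difference of the two frequencies, which the second valve then amplifies for the operator's headphones. The frequencies in this sketch are assumed for illustration, not taken from the set's documentation.

```python
# Sketch of why a heterodyne receiver makes CW Morse audible: mixing the
# incoming carrier with a local oscillation yields a beat note at the
# difference frequency, which falls in the audio range. Frequencies are
# illustrative assumptions.

def beat_frequency_hz(signal_hz: float, local_osc_hz: float) -> float:
    """Audible beat produced by heterodyning two frequencies."""
    return abs(signal_hz - local_osc_hz)

# A CW carrier at 550 kHz mixed with a local oscillation at 551 kHz:
tone = beat_frequency_hz(550_000, 551_000)
print(f"Beat note: {tone:.0f} Hz")  # 1000 Hz -- a comfortable Morse pitch
```

Detuning the local oscillator slightly changes the pitch of the note, which is also how the operator could separate stations lying close together in frequency.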
The complete set also included a heterodyne wavemeter, Selector Unit and Rectifier Unit.
Total production was a little under 3000 for both the transmitter and the receiver and approximately 400 of the Selector Units.
Telephone Wireless Aircraft Mk. II
The Telephone Aircraft Mk. II came into service in 1917. It had two B or F valves, one used for control and the other as the output valve. An accumulator supplied the valves’ filaments and the HT was derived from a wind-driven generator. It had a range of 3.2km to other aircraft and 2km to ground stations.
The aerial was a trailing wire of length 100–150ft with a lead weight at the end.
Earlier attempts to fit radio telephones in aircraft had been hampered by the high background noise from the aircraft’s engine. This problem was alleviated by the design of a helmet with built-in microphone and earphones to block much of the noise.
A typical receiver for use with this transmitter was the Tuner Aircraft Mk III which had three R valves, one for the detector and two for low-frequency amplification.
Conclusions
The army was very slow to adopt wireless for communication on the battlefield and relied too much on communication by cable. There was a genuine fear that wireless would be intercepted by the enemy, but this was also true of cable, which was constantly severed by shell fire and the passage of tanks across the battlefield.
Apart from a few high-powered transmitters that played a minor role in the war, the first wireless transmitters were fitted in aircraft during 1915. These were used to communicate with crystal receivers on the ground to direct artillery fire.
The first trench sets went into service towards the end of 1915. From this time onward the army came slowly round to realizing that wireless communication was a more reliable way to communicate than by cable, particularly when troops were moving rapidly forwards or backwards.
The most significant technical breakthrough came following the development of the TM valve in France. This, and its many derivatives, enabled reliable valve transmitters and receivers to be produced from 1916 onwards. It now became possible to make CW transmitters, which were far superior to the spark sets.
By mid-1917 the army at last accepted that radios were the best way to communicate and increasing numbers of these came into service in the final year of the war.
Acknowledgements
I should like to thank Nick Kendall-Carpenter and his archive staff at the Royal Signals Museum, Blandford, and Louis Meulstee and John Liffen of the Science Museum for their valuable assistance.
About the Author: Keith Thrower OBE is author of British Radio Valves: The Vintage Years – 1904-1925 and British Radio Valves: The Classic Years 1926-1946.
This article is based on the paper Keith gave at “Making Telecommunications in the First World War” in Oxford on 24 January 2014. See our events page for full details including the abstract, PowerPoint slides and full version of Keith’s paper.
Fig. 1 (left) Commercial version of Fleming diode; (right) BT-H version of the de Forest Audion triode, a ‘soft’ valve erratic in operation.
Fig. 2: Marconi-Round C and T valves of 1913. These were both ‘soft’ valves. The C was a receiver valve for use as a detector or RF amplifier. The T was a transmitter valve.
Fig 3: Marconi Short Distance Wireless Telephone Transmitter and Receiver. This set used a C valve in the receiver, connected as an RF amplifier with regenerative feedback to increase its gain and provide improved selectivity. Detection was by a carborundum crystal. For transmission there was a single T.N. valve (seen mounted in the frame) and this was connected as an oscillator. It is believed that Marconi used this set for CW voice trials in 1914.
Fig 9: Tuner Short Wave Mk. III*. This receiver has both a carborundum and a Perikon detector.
Fig. 10a & 10b: French TM and Osram F valve. The TM was a general-purpose valve used mainly as a detector or an AF amplifier. The F was a low-power transmitting valve similar in construction to the TM.
Fig. 10c & 10d: Top Q, bottom V24. The Q went into production at Edison Swan in 1916 and was used mainly as a detector. The V24 probably went into production at the end of 1917 or early in 1918. It was used as both an RF and an AF amplifier.
Second Lieutenant Basil Schonland R.E. Image available in the public domain.
No Corps of Signals existed in those days. Signalling was very much the province of the Royal Engineers and specifically its Telegraph Battalion and it was they who attempted to use wireless for the first time in a military conflict during the Boer War in South Africa. But it was not equal to the task and it was left to the Royal Navy to show the way. And show it they did during the blockade operation they were mounting in Delagoa Bay, Portuguese East Africa. Wireless proved itself at sea; it was still to do so on land.
In 1908 the Royal Engineer Signal Service came into being and it was this body of men, plus their horses, cable carts and much other paraphernalia of war that provided the British Army with its signalling capability during conflict that broke out in 1914.
By now wireless equipment suitable for use by soldiers and rugged enough to be hauled about on carts and on the backs of men was slowly becoming part of the Army’s inventory of equipment. And the officers and men were being trained to use it. Amongst that group was a young South African by the name of Basil Schonland. During the summer of 1915 he completed Part 1 of the Mathematical Tripos at Cambridge and immediately set his sights on serving his adopted country. Even whilst a schoolboy, and then an undergraduate in his home town of Grahamstown in South Africa’s Eastern Cape province, Schonland was a loyal subject of the King and, along with many of his fellow South Africans, he saw it as his duty to fight for King and Country.
Schonland was commissioned as a second lieutenant in August 1915 and immediately began training at the Signal Depot in Bletchley. In October he was given command of 43 Airline Section with 40 men, their horses and their cable carts and in January 1916 he led them into France where they joined the Fourth Army then being formed under Sir Henry Rawlinson.
It was the Battle of the Somme that saw wireless equipment pressed into service in earnest. Though hundreds of miles of telephone and telegraph cables had been laid only those buried at considerable depth had any hope of surviving the onslaught of almost incessant artillery barrages. Visual signalling by flag, heliograph and lamp was perilous in the extreme for the operator who raised himself mere inches above the parapet of a trench: wireless became almost obligatory. And Schonland, whose skills had already been noted, was soon to become a W/T officer in the Cavalry Corps. None was more enthusiastic.
Map showing the deployment of the wireless sets near the front line in September 1916. Image available in the public domain.
This new technology caught the imagination of a young man for whom science, and especially physics, was of almost overwhelming interest. He threw himself into mastering the wireless equipment and into passing on his knowledge to his men. The three trench sets with which Schonland became so familiar were the BF Set, the Wilson Set and the Loop Set. The ‘BF’ presumably meant “British Field” but to those who used it in earnest its eponymous letters had another meaning entirely! Like most of the equipment in use at that time the BF set had a spark transmitter and carborundum crystal detector. It radiated signals over a band of frequencies between about 540 and 860 kHz at a power of some 50 watts. The Wilson set was more powerful and used a more sophisticated method of generating its spark. The frequencies (or wavelengths in those days) that it covered were similar to the BF Set. Both were used extensively from within the trenches during the First Battle of the Somme in September 1916.
In 1917 a new wireless set was introduced. Called the W/T Set Forward Spark 20 Watt B, it soon became rather more familiar by the less wordy name of the Loop Set. The loop in question was its peculiar aerial (or antenna), which consisted of a square loop of brass tubing 1m per side that was mounted vertically on a bayonet stuck into the ground. The Loop Set’s other great claim to fame was that it was extremely simple to use even for an inexperienced operator. Morse code was the mode of transmission and that skill was fundamental to all who served in the R.E. Signal Service, officers included. Of particular importance, especially to the technically-minded such as Schonland, was the much higher frequency on which the Loop Set worked. It could be tuned to transmit and receive between 3.8 and 4.6 MHz and was claimed to have an effective range of 2000 yards. And though the transmitter still used a spark, the receiver contained two thermionic valves – an astounding technological leap at that time.
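As the passage notes, signallers of the period spoke of wavelengths rather than frequencies; the two are related by λ = c/f. The quick sketch below converts the bands quoted in the text (the BF Set's roughly 540–860 kHz and the Loop Set's 3.8–4.6 MHz) into the wavelengths a 1917 operator would have used.

```python
# Convert the frequency bands quoted in the text to the wavelengths in which
# operators of the period would have described them (lambda = c / f).

C = 3.0e8  # speed of light in m/s (rounded)

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in metres for a given frequency in hertz."""
    return C / freq_hz

# BF Set: roughly 540-860 kHz
print(f"BF Set:   {wavelength_m(860e3):.0f}-{wavelength_m(540e3):.0f} m")  # ~349-556 m
# Loop Set: 3.8-4.6 MHz
print(f"Loop Set: {wavelength_m(4.6e6):.0f}-{wavelength_m(3.8e6):.0f} m")  # ~65-79 m
```

The Loop Set's band thus corresponds to wavelengths of only about 65–79 m, far shorter than the several-hundred-metre waves of the earlier trench sets.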
By then Schonland had left the front line and was instructing at the GHQ Central Wireless School at Montreux where he was also promoted to lieutenant. It was there that he and another South African by the name of Spencer Humby conducted their own ‘researches into wireless’ which they published in a scientific journal soon after the end of the war. “The wavelengths radiated by oscillating valve circuits” became an important paper in the field of wireless communications that flowered in the 1920s.
But Schonland was not only a competent physicist; he also wielded an educated pen and his most lasting contribution to wireless communications during WW1 was his four-part series of articles published in 1919 in The Wireless World. They appeared under the title of this article and described the use of wireless in the trenches and were possibly the first such articles to tell how wireless was used during the war by the R.E. Signals Section. The Boy’s Own Paper had nothing on them for verve and excitement! Take this passage in which the young Schonland describes an attack during the battle of Arras in which a key hilltop position had been captured by the British Army. However, the enemy was re-grouping below and a counter-attack was imminent.
Owing, however, to the speed of their advance our troops were out of touch with the higher command, and the guns behind them. Out of touch, did I say? What is this queer mast affair some sappers are rigging up in the garden of what was once a pretty cottage? Up go the small steel masts in spite of the shells streaming into the village … The aerial up, it is not long before they have installed their tiny set in the cellar and are ‘through’. R9 signals each way. Just in time too, for the Boche at the foot of the hill shows signs of counter-attack. “Get at the guns, Sparks, get at the guns!”. And Sparks bends to his key …
By the war’s end Basil Schonland had been promoted captain and was in charge of all wireless communications of the British First Army. Under him he had thirty officers and more than 900 men, along with over 300 wireless sets. And soon, after the end of hostilities, strenuous efforts were made to retain his services as Chief Instructor in Wireless in the British Army. But Schonland was intent on following a career as a scientist and he returned to Cambridge to work under Lord Rutherford at the famous Cavendish Laboratory. However he was not lost entirely to the colours for a mere twenty years later he was back in uniform and served throughout the second great conflict with distinction, ultimately as scientific adviser to Field Marshal Montgomery’s 21st Army Group.
About the author
Dr Brian Austin is a retired engineering academic from the University of Liverpool’s Department of Electrical Engineering and Electronics. Before that he spent some years on the academic staff of his alma mater, the University of the Witwatersrand in Johannesburg, South Africa. He also had a spell, a decade in fact, in industry where he led the team that developed an underground radio system for use in South Africa’s very deep gold mines.
He also has a great interest in the history of his subject and especially the military applications of radio and electronics. This has seen him publish a number of articles on topics from the first use of wireless in warfare during the Boer War (1899 – 1902) and South Africa’s wartime radar in WW2, to others dealing with the communications problems during the Battle of Arnhem and, most recently, on wireless in the trenches in WW1. He is also the author of the biography of Sir Basil Schonland, the South African pioneer in the study of lightning, scientific adviser to Field Marshal Montgomery’s 21st Army Group and director of the Atomic Energy Research Establishment at Harwell.
Ahead of next year’s centenary, Elizabeth Bruton and Graeme Gooday ask what were the different motivations of scientists, the military and industry in terms of World War One innovation and research – patriotism, profit, or both?
Should innovators profit from warfare? Is it reasonable instead to ask scientists and engineers to act from pure patriotism alone? As Scientists for Global Responsibility has recently voiced alarm about UK science’s reliance on military funding, it is revealing to look back to a time before science entered a Faustian pact with armed conflict.
Prior to World War One, Britain did not have a military-industrial complex in which scientists routinely participated with industry to facilitate ever more warfare. Even in the first year of the war, rather than safely researching in a laboratory, a brilliant scientist such as Henry Moseley could die at Gallipoli, shot by a sniper while serving as a signals engineer. Reflecting on such tales, we think we know about the Great War: the patriotism and sacrifice of those in the armed forces and the terrible and pointless loss of life – especially on the Western Front – throughout the four long years of war.
But numerous historians have recently rethought these stereotypes. How was it that the war continued for four years, with 16 million dying while millions of pounds and dollars were spent on armaments and the routine expense of war? Who was manufacturing such weaponry and ammunition, and who developed the infrastructure of scientific research that helped to win the ‘Great War’? More importantly, what were their motives: patriotic altruism, private profit – or an uneasy mixture of both?
In light of the impending centenary of this global catastrophe, we find that patriotism was not always the sole or indeed the main rationale for industrial activity in wartime. Indeed, afterwards the financial rewards for war-winning innovation were treated somewhat differently to equivalent creative acts during peacetime.
When Britain entered the war on 4 August 1914 the Marconi Company, with evident patriotic fervour, offered its wireless operators and training to facilitate the armed services’ use of wireless communications. It did so without any initial upfront demand for payment. The Company also allowed government ‘censors’ to monitor all communications through their long-distance wireless stations. Suspicious communications were intercepted and passed onto code-breakers in the Admiralty’s secret ‘Room 40’. During the war, the Company apparently received no compensation or out-of-pocket expenses for this work: in summer 1915 Marconi’s General Manager complained that “not one penny-piece has yet been refunded to us.”
By now, it was clear that the German model of state investment in research could win wars more decisively than uncoordinated private industry, laissez-faire invention, and British heroism. Stung into action by German innovations in poison gas warfare and devastatingly effective interception of French and British telecommunications, in 1915 the UK government established its own national Department of Scientific Industrial Research (DSIR).
Supported initially by the ‘Million Fund’ – approximately £45 million today – the DSIR both hired scientists for laboratory research and encouraged private industrial firms to establish co-operative industrial research associations. Unlike the Marconi Company, however, many companies did not willingly offer their services to the state. This is evident from the 1915 extension to the Defence of the Realm Act (1914): now key British industries were compelled to prioritise government and military orders.
The production of armaments and industrial infrastructure was thereby raised to a level that, when combined with American input from 1917, could support a military force capable of winning the war. By then increased state support for science and industry was having a noticeable effect. For example, the aeroplane invented just over a decade previously was adapted dexterously to the purposes of aerial combat and the ‘tank’ changed the nature of battle when first introduced in France in 1916.
Soon after the so-called ‘Great War’ was concluded in November 1918, a Royal Commission on Awards to Inventors rewarded hundreds of such wartime innovations. It eventually handed out £1.5 million (about £75 million today) in a Britain nearly bankrupted by the cost of conflict. The distribution indicates just how much the British establishment acknowledged national inventiveness, crediting tanks and aeroplanes as crucial to the recent victory. The Commission rejected claims about other inventions it deemed to lack genuine novelty or life-saving significance.
Telecommunications had been of great importance during wartime, especially when threatened by interception. The catastrophic interception of British and French forward communications by the Germans early in the war resulted in the development and widespread deployment of an interception-proof alternative. This was the so-called Fullerphone, invented and patented by a serving military officer, Captain Algernon Clement Fuller, in 1916. When Fuller took his device to the Commission soon after the war ended, however, he was offered much less than he requested: not only did his device rely heavily on the work of others, but his patent rights would also reap him further international rewards. Fuller perhaps took comfort from his post-war promotion, eventually reaching the rank of Major-General.
A young Henry Moseley, taken in the Balliol-Trinity Laboratory, Oxford, c.1910. Source: Wikimedia Commons.
In contrast, the Marconi Company’s wartime contribution was more richly rewarded than that of Fuller. This was due in part to the eventual recognition of the Company’s important role in supporting the British government and the Admiralty. Not only had Marconi intercepted hostile communications, but its “direction finders” had tracked German navy and airships in the open sea.
Despite this, the Marconi Company entered into an extraordinary post-war dispute with the British government, demanding large rewards for its wartime contributions. Marconi’s lawyers actually accused the government of infringing the Company’s wireless patents: exploiting its intellectual property without due payment. So difficult did the discussions become on the six-figure royalty claims that the matter was devolved to a private adjudication. Although the final amount paid was never publicized, the Marconi Company was soon able to buy up telegraph companies to fulfil its long-held ambition to become a telecommunications giant – later known as Cable and Wireless.
So how then shall we commemorate Fuller and Marconi and indeed their industrial production teams for their wartime innovations? Were they like Moseley nobly donating their all to the cause, seeking only recompense to endure the hardships of war? Or to rephrase Clausewitz’s old dictum, was warfare for them just profit by other means…?
Content
All postings on this blog are provided “AS IS” with no warranties, and confer no rights. All entries in this blog are personal opinion and do not necessarily reflect the opinion of the Museum of the History of Science, the University of Leeds, or any other project partner.
Command, Control, Communication, Electricity, 1917
Communications in 1917: during an attack, the signals squad climbed out of the trenches to follow the advancing troops, laying down networks of telephone and telegraph lines as they went. Credit: Scientific American Supplement, March 17, 1917
Advertisement
Armies in the First World War were vast compared with armies in preceding wars. The telephone, telegraph and the appearance of effective wireless radio sets were replacing older communication methods. The article in Scientific American from March 17, 1917, says:
“With an army strung out over miles of irregular trenches prompt communication by the older method is obviously impossible, although special instructions carried by fast motorcycles have been found greatly superior to the old horse-mounted messengers; but where rapid communication with the commanders of long lines of trenches, and numerous widely scattered batteries of guns, is necessary something vastly more prompt and certain is required, and in this emergency recourse is had to the telephone, which has proved to be indispensable. By means of telephones, operated through heavily insulated wires that can be run rapidly from point to point, resting directly upon the ground without any supports or elaborate fixtures, orders may be transmitted along miles of trenches within a few minutes, where flags could not be seen, and where messengers even on the fastest motorcycles would require hours, even if they got through safely at all, thus enabling rapid cooperative action to be taken, or special advances organized and properly supported both by troops and guns.”
A German field telephone setup—portable of course—from 1914. Credit: Scientific American, December 19, 1914
Artillery, though, frequently cut telephone wires, so “runners” carrying messages back and forth were widely used. As warfare became more mobile late in the war, static telephone lines became less useful, while radios were becoming more portable. (Carrier pigeons, it should be noted, remained useful until well after the Second World War.)
The views expressed are those of the author and are not necessarily those of Scientific American.
Communications in 1917: during an attack, the signals squad climbed out of the trenches to follow the advancing troops, laying down networks of telephone and telegraph lines as they went. Credit: Scientific American Supplement, March 17, 1917
Advertisement
Armies in the First World War were vast compared with armies in preceding wars. The telephone, telegraph and the appearance of effective wireless radio sets were replacing older communication methods. The article in Scientific American from March 17, 1917, says:
“With an army strung out over miles of irregular trenches prompt communication by the older method is obviously impossible, although special instructions carried by fast motorcycles have been found greatly superior to the old horse-mounted messengers; but where rapid communication with the commanders of long lines of trenches, and numerous widely scattered batteries of guns, is necessary something vastly more prompt and certain is required, and in this emergency recourse is had to the telephone, which has proved to be indispensable. By means of telephones, operated through heavily insulated wires that can be run rapidly from point to point, resting directly upon the ground without any supports or elaborate fixtures, orders may be transmitted along miles of trenches within a few minutes, where flags could not be seen, and where messengers even on the fastest motorcycles would require hours, even if they got through safely at all, thus enabling rapid cooperative action to be taken, or special advances organized and properly supported both by troops and guns.”
A German field telephone setup—portable of course—from 1914. Credit: Scientific American, December 19, 1914
Artillery, though, frequently cut telephone wires, so “runners” carrying messages back and forth were widely used. As warfare became more mobile late in the war, static telephone lines became less useful, while radios were becoming more portable. (Carrier pigeons, it should be noted, remained useful until well after the Second World War.)
| yes
Radio | Were radios used in the Trenches during World War I? | yes_statement | "radios" were used in the trenches during world war i.. the trenches during world war i used "radios". | https://www.nationalarchives.gov.uk/first-world-war/telecommunications-in-war/ | Fighting talk: First World War telecommunications - The National ... | Fighting talk: First World War telecommunications
Group of five women in Women's Auxiliary Army Corps (WAAC) uniform working as linemen, with a GPO supervisor c.1917 (BT Archives cat ref: TCE 361/ARC 3005)
As the First World War raged, governments harnessed modern technologies to give them an advantage in conflict. New inventions – from tanks to Zeppelins – appeared on the battlefield, while existing technologies were adapted to fit the needs of the British war effort.
As a result of the need to exchange information faster and more efficiently, telecommunications advanced rapidly in this time. The Engineering Department of the General Post Office, a government body that became British Telecom (BT), played a major role in innovation in telecommunications, as well as supplying the British military and civilians with ways of communicating.
The First World War famously saw the creation of the Royal Air Force. Air warfare demanded further evolution in telecommunications: keeping pilots updated while in the air with intelligence-gathering and decision-making was crucial to operational success. Images from a 1918 handbook of airborne communication equipment indicate the way in which telecommunications evolved in line with military technology.
Image from a 1918 handbook with examples of aircraft telecommunications headwear (cat ref: AIR 10/100)
In particular, the development of the throat microphone was a significant advance, as it allowed pilots to use aircraft telephones without their hands. Captain B S Cohen’s October 1919 report into aircraft telephones refers to some of the engineering work carried out to develop such devices, including the ‘hands-free’ kit.
The Engineering Department was not only important for the Western Front; it had a crucial role in keeping Britain as safe as possible. Rigid Inflatable Airships – also known as ‘Zeppelins’ after Count Ferdinand von Zeppelin, the German pioneer of the airship – were a source of widespread fear in mainland Britain; the German army and navy used them as bombers and scouts.
The Engineering Department provided key equipment to intercept and report the wireless signals that enemy aircraft, including Zeppelins, often used to navigate. This was done at the Department’s ‘Direction Finding Stations’ at Peterborough, Seaham Harbour, Westgate and Falkirk in Scotland. When the location of enemy aircraft was identified, the information was wired to the Intelligence Department of the War Office. Sir William Slingo’s report includes a map indicating the routes of Zeppelins that took part in the raid of 2 and 3 May 1917.
Map indicating the routes of Zeppelins that took part in the 2 and 3 May raid, 1917 (BT Archives cat ref: POST 30/4304A/183)
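The work of the Direction Finding Stations rested on a simple geometric idea: two stations each take a compass bearing on the same transmission, and the transmitter lies where the bearing lines cross. The sketch below is a hypothetical, flat-map illustration of that idea only – the station positions and bearings are invented, and wartime plotting was done graphically on charts rather than by formula.

```python
import math

# Hypothetical flat-map sketch of a two-station direction-finding fix.
# Bearings are compass bearings (degrees clockwise from north); positions
# are arbitrary map coordinates. All numbers are invented for illustration.
def fix_from_bearings(p1, brg1, p2, brg2):
    """Intersect two bearing lines: p1 along brg1, p2 along brg2."""
    # Unit vector for a compass bearing: north = +y, east = +x.
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t (Cramer's rule on the 2x2 system).
    det = d2[0] * d1[1] - d2[1] * d1[0]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    t = (d2[0] * (p2[1] - p1[1]) - d2[1] * (p2[0] - p1[0])) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two invented stations both hearing the same transmitter.
fix = fix_from_bearings((0.0, 0.0), 45.0, (20.0, 0.0), 315.0)
```

A bearing from a third station over-determines the fix and lets plotters average out observational error – one reason a chain of stations (Peterborough, Seaham Harbour, Westgate, Falkirk) was more useful than any single post.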
Cutting off communications altogether was another way to disrupt an enemy attack. A map from the Ministry of Munitions file indicates, in red, where communications would be suspended in times of an emergency – that is, if the German army successfully landed in Britain.
Under emergency protocols, all communications would be suspended in Ireland and on the majority of Britain’s eastern coastline – from Dundee to Dover – if a German invasion was successful. In the event of invasion, responsibility for communications would transfer from the Engineering Department to the Army Signal Service.
Major T F Purves, commissioned officer in the Royal Engineers, worked with Post Office engineers to oversee the provision of over 200 items of special telecommunications apparatus. These were adapted to fit the needs of British soldiers in the trenches and ranged from modified cavalry field radios to field communication devices for gun spotters.
This improved telecommunications equipment made it easier for troops and officers to get information up and down the chain of command; from forces headquarters to the front line and back. One important piece of apparatus for sending intelligence and operational updates was the portable morse code machine, used by the British army throughout the conflict and often in trench holes at the heart of the battle.
During this period, work was also being done to improve the military’s ability to attack and defend. One of these techniques was sound ranging, a process that uses sound to work out the position and coordinates of enemy artillery firing – you can see the devices used to conduct experiments into sound ranging below in fascinating photographs from the Ministry of Aviation. Within a report held by BT Archives covering the Engineering Department’s work during the First World War, a letter written by General Douglas Haig, Commanding-in-Chief, British Armies in France, to Joseph Pease, Postmaster General, thanks officers for the great assistance provided in connection with sound ranging.
Thanks for the courtesy shown and the assistance rendered, which has been of material advantage towards furthering military operations.
Sound ranging equipment (cat ref: AVIA 7/2768)
Records belonging to the Geographical Section of the General Staff, a department of the War Office, also reveal some wonderful illustrative diagrams. The diagram below, detailing instructions on how to determine the location of gun fire, demonstrates the complexity of this procedure prior to the development of sound ranging equipment.
Diagram from the Geographical Section of the General Staff (cat ref: WO 297/73)
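The sound-ranging principle described above can also be illustrated with a small numerical sketch: microphones at surveyed positions record when the gun's report arrives, and the differences between those arrival times pin down the gun. Everything below is an invented, simplified illustration – the positions, the brute-force search and the speed-of-sound constant are assumptions, not the historical apparatus, which used specialised recorders and graphical plotting.

```python
import math

# Hypothetical sketch of the sound-ranging idea. Coordinates are in
# metres, times in seconds; 340 m/s is a rough speed of sound in air.
SPEED_OF_SOUND = 340.0

def arrival_times(gun, mics):
    """Time for the gun's report to reach each microphone."""
    return [math.dist(gun, m) / SPEED_OF_SOUND for m in mics]

def locate_gun(mics, times, half_width=2000, step=10):
    """Brute-force search for the grid point whose predicted arrival-time
    differences best match the observed ones (least squares)."""
    observed = [t - times[0] for t in times[1:]]
    best, best_err = None, float("inf")
    for x in range(-half_width, half_width + 1, step):
        for y in range(-half_width, half_width + 1, step):
            t0 = math.dist((x, y), mics[0]) / SPEED_OF_SOUND
            err = 0.0
            for m, obs in zip(mics[1:], observed):
                pred = math.dist((x, y), m) / SPEED_OF_SOUND - t0
                err += (pred - obs) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Invented scenario: four listening posts and a gun about 1.5 km away.
mics = [(0, 0), (500, 0), (1000, 0), (250, 100)]
gun = (400, 1500)
estimate = locate_gun(mics, arrival_times(gun, mics))
```

In practice the wartime sections plotted these time differences as intersecting arcs on a map board; the grid search here is the same arithmetic done by exhaustion.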
The first cross-channel cable in the English Channel was put in place in 1850 and by the turn of the 20th century it was filled with a criss-cross of cables going north-south as well as west-east. The Engineering Department administered and staffed the ships that laid and maintained the cables required for trans-channel telecommunications. According to Sir William Slingo’s report, when war between the United Kingdom and Germany seemed imminent, ‘the cables connecting England and Germany were disconnected… but on receipt of remonstrance from Germany, communication was temporarily restored’. The General Post Office cable ship ‘Alert’ manned by Engineering Department staff duly cut the cable again when war was announced officially just hours later.
Laying cable was not without risk. The cable ship ‘Monarch’ was leased to the Grand Fleet of the Royal Navy and based at Scapa Flow. On 8 September 1915, while repairing the cable between Beachy Head and Havre, she struck a mine and sank almost immediately. Three men died.
There are several references to the bravery of the men who staffed the cable ships. Letters from the records of the Treasury highlight the responsibilities that were placed upon these men: in some cases, men of a relatively junior rank had to take on the role of a much higher rank, for example Commander or Chief Officers, for a reasonable period of time. Operations could be dangerous, with the risk of hostile action. With such undertakings, it was deemed fair to provide men with ‘substitution payments’ for their work.
The female workforce was also taking on new responsibilities. As a great number of the Engineering Department’s workforce enlisted in the army, temporary workers – thousands of them women – were drafted in to replace them. Before the war, roles for women had been predominantly secretarial, but in these settings a wider variety of tasks, such as working a switchboard, became possible.
As men and women began to work alongside one another during the war, comparisons between their work became easier to draw: documentation shown here states that people favoured the female ‘night telephonist’ over her male counterpart. Documents like these helped to quash traditional notions about women’s abilities – the same notions under which they had been deemed ‘too emotional’ to vote.
The Female Night Telephonist is a quicker and more accurate worker than the man; that she is more deft and skilful in manipulation, more assiduous in attention to signals, and that she responds more quickly and efficiently to exceptional demands at times of pressure.
At first glance, this graph from Sir William Slingo’s report appears to suggest that at one point the Engineering Department employed more women than men. A closer inspection of the axes, however, reveals that male staff numbered in the thousands while female staff numbered in the hundreds. Even so, the number of women employed rose steadily throughout the war.
The story and legacy of the Engineering Department’s work lives on in material held at BT Archives in High Holborn, and in the vast array of First World War material held at The National Archives in Kew. | Major T F Purves, commissioned officer in the Royal Engineers, worked with Post Office engineers to oversee the provision of over 200 items of special telecommunications apparatus. These were adapted to fit the needs of British soldiers in the trenches and ranged from modified cavalry field radios to field communication devices for gun spotters.
This improved telecommunications equipment made it easier for troops and officers to get information up and down the chain of command; from forces headquarters to the front line and back. One important piece of apparatus for sending intelligence and operational updates was the portable morse code machine, used by the British army throughout the conflict and often in trench holes at the heart of the battle.
During this period, work was also being done to improve the military’s ability to attack and defend. One of these techniques was sound ranging, a process that uses sound to work out the position and coordinates of enemy artillery firing – you can see the devices used to conduct experiments into sound ranging below in fascinating photographs from the Ministry of Aviation. Within a report held by BT Archives covering the Engineering Department’s work during the First World War, a letter written by General Douglas Haig, Commanding-in-Chief, British Armies in France, to Joseph Pease, Postmaster General, thanks officers for the great assistance provided in connection with sound ranging.
Thanks for the courtesy shown and the assistance rendered, which has been of material advantage towards furthering military operations.
Sound ranging equipment (cat ref: AVIA 7/2768)
Records belonging to the Geographical Section of the General Staff, a department of the War Office, also reveal some wonderful illustrative diagrams. The diagram below, detailing instructions on how to determine the location of gun fire, demonstrates the complexity of this procedure prior to the development of sound ranging equipment.
Diagram from the Geographical Section of the General Staff (cat ref: WO 297/73)
| yes |
Radio | Were radios used in the Trenches during World War I? | yes_statement | "radios" were used in the trenches during world war i.. the trenches during world war i used "radios". | https://www.warmuseum.ca/firstworldwar/history/battles-and-fighting/tactics-and-logistics-on-land/communication/ | Tactics and Logistics on Land | Canada and the First World War | Communication
Communication between the rear areas and front line units, as well as laterally along the front line, was always difficult, and often led to failures in battle.
Vulnerable Communication on the Ground
On the ground, signalers used a variety of devices and methods to send messages. Telephones were reliable, but their long, strung-out wires were vulnerable to shellfire and frequently cut. Burying the lines deep into the ground was labour intensive and time consuming, and still did not always protect against shellfire. Pigeons were surprisingly effective in carrying messages, but required special handlers and could become disoriented by the noise of artillery barrages.
Bulky and Fragile Radios
Wireless telegraphy (radio) could transmit Morse code in 1914, but the wireless sets were bulky and fragile, with relatively short ranges. Later, more robust wireless sets were used by observers to direct artillery fire.
Limited Communication with Aircraft
Aircraft flew “contact patrols” to observe the forward movement of troops. They were only fitted with transmitting wireless sets, as receivers added too much weight to the airframe. Aircraft could also drop messages near a headquarters or friendly position. Troops on the ground used various methods, such as signal lamps, panels, and flares, to send messages to aircraft.
Unresolved Problems
Despite these tools, communication often broke down between the attacking infantry and their headquarters in the rear. When this happened, commanders did not know the location of their troops and were unable to support them with accurate artillery fire, ammunition, or supplies. This failure in communication was never fully solved during the war. | Communication
Communication between the rear areas and front line units, as well as laterally along the front line, was always difficult, and often led to failures in battle.
Vulnerable Communication on the Ground
On the ground, signalers used a variety of devices and methods to send messages. Telephones were reliable, but their long, strung-out wires were vulnerable to shellfire and frequently cut. Burying the lines deep into the ground was labour intensive and time consuming, and still did not always protect against shellfire. Pigeons were surprisingly effective in carrying messages, but required special handlers and could become disoriented by the noise of artillery barrages.
Bulky and Fragile Radios
Wireless telegraphy (radio) could transmit Morse code in 1914, but the wireless sets were bulky and fragile, with relatively short ranges. Later, more robust wireless sets were used by observers to direct artillery fire.
Limited Communication with Aircraft
Aircraft flew “contact patrols” to observe the forward movement of troops. They were only fitted with transmitting wireless sets, as receivers added too much weight to the airframe. Aircraft could also drop messages near a headquarters or friendly position. Troops on the ground used various methods, such as signal lamps, panels, and flares, to send messages to aircraft.
Unresolved Problems
Despite these tools, communication often broke down between the attacking infantry and their headquarters in the rear. When this happened, commanders did not know the location of their troops and were unable to support them with accurate artillery fire, ammunition, or supplies. This failure in communication was never fully solved during the war. | yes |
Radio | Were radios used in the Trenches during World War I? | yes_statement | "radios" were used in the trenches during world war i.. the trenches during world war i used "radios". | https://www.britannica.com/technology/military-communication/From-World-War-I-to-1940 | Military communication - WWI, 1940, Technology | Britannica | The onset of World War I found the opposing armies equipped to a varying degree with modern means of signal communication but with little appreciation of the enormous load that signal systems must carry to maintain control of the huge forces that were set in motion. The organization and efficiency of the armies varied greatly. At one end of the scale was Great Britain, with a small but highly developed signal service; and at the other end stood Russia, with a signal service inferior to that of the Union Army at the close of the American Civil War. The fact that commanders could not control, coordinate, and direct huge modern armies without efficient signal communication quickly became apparent to both the Allies and the Central Powers. The Germans, despite years of concentration on the Schlieffen Plan, failed to provide adequately for communication between higher headquarters and the rapidly marching armies of the right wing driving through Belgium and northern France. This resulted in a lack of coordination between these armies, which caused a miscarriage of the plan, a forced halt in the German advance, and the subsequent withdrawal north of the Marne. On the Allied side, the debacle of the Russian forces in East Prussia—a crushing defeat at the hands of General Paul von Hindenburg in the Battle of Tannenberg—was largely due to an almost total lack of signal communication.
As the war progressed there was a growing appreciation of the need for improved electrical communications of much greater capacity for the larger units and of the need within regiments for electrical communications, which had heretofore been regarded as unessential and impractical. Field telephones and switchboards were soon developed, and those already in existence were improved. An intricate system of telephone lines involving thousands of miles of wire soon appeared on each side. Pole lines with many crossarms and circuits came into being in the rear of the opposing armies, and buried cables and wires were laid in the elaborate trench systems leading to the forwardmost outposts. The main arteries running from the rear to the forward trenches were crossed by lateral cable routes roughly parallel to the front.
Thus, there grew an immense gridwork of deep buried cables, particularly on the German side and in the British sectors of the Allied side, with underground junction boxes and test points every few hundred yards. The French used deep buried cable to some extent but generally preferred to string their telephone lines on wooden supports set against the walls of deep open trenches. Thus electrical communication in the form of the telephone and telegraph gradually extended to the smaller units until front-line platoons were frequently kept in touch with their company headquarters through these mediums.
Despite efforts to protect the wire lines, they were frequently cut at critical times as the result of the intense artillery fire. This led all the belligerents to develop and use radio (wireless) as an alternate means of communication. Prewar radio sets were too heavy and bulky to be taken into the trenches, and they also required large and highly visible aerials. Radio engineers of the belligerent nations soon developed smaller and more portable sets powered by storage batteries and using low, inconspicuous aerials. Although radio equipment came to be issued to the headquarters of all units, including battalions, the ease of enemy interception, the requirements for cryptographing or encoding messages, and the inherent unreliability of these early systems caused them to be regarded as strictly auxiliary to the wire system and reserved for emergency use when the wire lines were cut. Visual signaling returned to the battlefield in World War I with the use of electric signal lamps. Pyrotechnics, rockets, Very pistols, and flares had a wide use for transmitting prearranged signals. Messenger service came to be highly developed, and motorcycle, bicycle, and automobile messenger service was employed. Homing pigeons were used extensively as one-way messengers from front to rear and acquitted themselves extremely well. Dogs were also used as messengers and, in the German army, reached a high degree of efficiency.
A new element in warfare, the airplane, introduced in World War I, immediately posed a problem in communication. During most of the war, communication between ground and air was difficult and elementary. To make his reports the pilot had to land or drop messages, and he received instructions while in the air from strips of white and black cloth called “panels” laid out in an open field according to prearranged designs. Extensive efforts were made to use radiotelegraph and radiotelephone between the airplanes and ground headquarters. The closing stages of the war saw many planes equipped with radio, but the service was never satisfactory or reliable and had little influence on military operations.
During World War I, wireless telegraph communication was employed extensively by the navies of the world and had a major influence on the character of naval warfare. High-powered shore and ship stations made wireless communication over long distances possible.
One of the war lessons learned by most of the major nations was the compelling need for scientific research and development of equipment and techniques for military purposes. Although the amount of funds devoted to military development during the period from World War I to World War II was relatively small, the modest expenditures served to establish a bond between industry, science, and the armed forces of the major nations.
Of great importance in postwar radio communication was the pioneering by amateurs and by industry and science in the use of very high frequencies. These developments opened up to the armed services the possibilities of portable short-range equipment for mobile and portable tactical use by armies, navies, and air forces. Military work in these fields was carried out actively in Germany, Great Britain, and the United States. As early as 1938 Germany had completed the design and manufacture of a complete line of portable and mobile radio equipment for its army and air force.
Between World Wars I and II the printing telegraph, commonly known as the teleprinter or teletypewriter machine, came into civilian use and was incorporated in military wire-communication systems, but military networks were not extensive. Before World War II, military radioteleprinter circuits were nonexistent.
Another major communication advance that had its origin and early growth during the period between World Wars I and II was frequency-modulated (FM) radio. Developed during the late 1920s and early 1930s by Edwin H. Armstrong, an inventor and a major in the U.S. Army Signal Corps during World War I, this new method of modulation offered heretofore unattainable reduction of the effect of ignition and other noises encountered in radios used in vehicles. It was first adapted for military use by the U.S. Army, which, prior to World War II, had under development tank, vehicular, and man-pack frequency-modulated radio transmitters and receivers.
On the eve of World War II, all nations employed generally similar methods for military signaling. The messenger systems included foot, mounted, motorcycle, automobile, airplane, homing pigeon, and the messenger dog. Visual agencies included flags, lights, panels for signaling airplanes, and pyrotechnics. The electrical agencies embraced wire systems providing telephone and telegraph service, including the printing telegraph. Both radiotelephony and radiotelegraphy were in wide use, but radiotelephony had not as yet proved reliable and satisfactory for tactical military communication. The navies of the world entered World War II with highly developed radio communication systems, both telegraph and telephone, and with development under way of many electronic navigational aids. Blinker-light signaling was still used. The use of telephone systems and loud-speaking voice amplifiers on naval vessels had also come into common use. Air forces employed wire and radio communication to link up their bases and landing fields and had developed airborne long-range, medium-range, and short-range radio equipment for air-to-ground and air-to-air communication. | Thus electrical communication in the form of the telephone and telegraph gradually extended to the smaller units until front-line platoons were frequently kept in touch with their company headquarters through these mediums.
Despite efforts to protect the wire lines, they were frequently cut at critical times as the result of the intense artillery fire. This led all the belligerents to develop and use radio (wireless) as an alternate means of communication. Prewar radio sets were too heavy and bulky to be taken into the trenches, and they also required large and highly visible aerials. Radio engineers of the belligerent nations soon developed smaller and more portable sets powered by storage batteries and using low, inconspicuous aerials. Although radio equipment came to be issued to the headquarters of all units, including battalions, the ease of enemy interception, the requirements for cryptographing or encoding messages, and the inherent unreliability of these early systems caused them to be regarded as strictly auxiliary to the wire system and reserved for emergency use when the wire lines were cut. Visual signaling returned to the battlefield in World War I with the use of electric signal lamps. Pyrotechnics, rockets, Very pistols, and flares had a wide use for transmitting prearranged signals. Messenger service came to be highly developed, and motorcycle, bicycle, and automobile messenger service was employed. Homing pigeons were used extensively as one-way messengers from front to rear and acquitted themselves extremely well. Dogs were also used as messengers and, in the German army, reached a high degree of efficiency.
A new element in warfare, the airplane, introduced in World War I, immediately posed a problem in communication. During most of the war, communication between ground and air was difficult and elementary. To make his reports the pilot had to land or drop messages, and he received instructions while in the air from strips of white and black cloth called “panels” laid out in an open field according to prearranged designs. Extensive efforts were made to use radiotelegraph and radiotelephone between the airplanes and ground headquarters. | yes |
Radio | Were radios used in the Trenches during World War I? | no_statement | "radios" were not used in the trenches during world war i.. the trenches during world war i did not use "radios". | https://history.army.mil/books/30-17/s_5.htm | Getting the Message Through-Chapter 5 | The United States managed to remain neutral in the European conflict
from August 1914 to April 1917. The nation had traditionally been isolated
and protected from Old World contests by its ocean moat, but such geographic
security could no longer be taken for granted when Germany's indiscriminate
use of submarine warfare violated the traditional rights of neutrals.
Americans' belief in an Allied victory had initially made the necessity
of preparations for war seem remote. But as the war in the west developed
into a bloody stalemate, the Allies' best efforts appeared able to guarantee
only more of the same. On the other hand, the dire prospect of a German
victory and the consequent disruption of the European balance of power
jeopardized U.S. national interests and spurred the call to arms.
Despite the clamor of the preparedness movement and the loss of American
lives at sea, President Woodrow Wilson moved cautiously from a policy
of strict neutrality to the adoption of a moralistic crusade "to
make the world safe for democracy." His insistence on neutrality
until nearly the eve of war, however, severely hampered preparedness
efforts by the War and Navy Departments. In his view, such activities
would not be "neutral." The Signal Corps, meanwhile, faced
the same difficulties as the rest of the Army in preparing its communicators
for duty overseas. But the Corps' problems were complicated by dissension
within its own ranks, the outcome of which would have a significant
impact on the branch's future.
Trouble in the Air
As the experiences in Mexico had clearly illustrated, all was not
well with the Signal Corps' Aviation Section. In fact, problems had
been brewing for several years. A series of investigations into the
section's activities from 1915 to 1917 revealed the growing tension
between those Corps members who flew and those who did not.1
When Col. David C. Shanks of the Inspector General's Department visited
the aviation school in San Diego to conduct the annual inspection in
January 1915, he made the unsettling discovery that, besides the hiring
of an aeronautical engineer, very little had been done in response to
the previous year's recommendations. In fact, a subsequent probe revealed
that Lt. Col. Samuel Reber, head of the Aviation Section, had suppressed
the critical report.2
Consequently, the
[165]
Army's chief of staff, Maj. Gen. Hugh L. Scott, appointed an investigating
board headed by the inspector general, Brig. Gen. Ernest A. Garlington,
to examine the administration of the Aviation Section. (Garlington had
commanded the unsuccessful attempt to rescue Lieutenant Greely and his
men from the Arctic in 1883.) About the same time, Senator Joseph T.
Robinson of Arkansas called for an investigation of the air service.
While the Senate passed his resolution on 16 March, the day after the
1st Aero Squadron arrived in Columbus, the House did not concur, and
the congressional initiative ended.3
As part of its investigation, the Garlington board inquired into allegations
made by Lt. Col. Lewis E. Goodier that improper disbursements of flight
pay had been made to Capt. Arthur S. Cowan, commanding officer of the
San Diego school, and some of his staff. These men were not, Goodier
alleged, qualified pilots.4
The board, after a month of taking testimony, determined that Goodier's
allegations were true. In the meantime, however, a subsequent investigation
by the Office of the Judge Advocate General had ruled that Cowan could
retain his aviator rating and the extra pay.5
The Garlington board also found that the officers assigned to monitor
contracts with private airplane manufacturers had been accepting substandard
machines. The board held Chief Signal Officer Scriven and Colonel Reber
responsible for allowing unsafe aircraft to be used and further criticized
Scriven for not adequately supervising the Aviation Section. Secretary
of War Newton D. Baker concurred with the board's findings and censured
Scriven and Reber for failing to enforce and maintain discipline and
neglecting to observe military regulations. Baker also announced his
intention to reorganize the Aviation Section.6
Scriven, for his part, accused aviation officers of insubordination
and disloyalty.7
In April 1916, at Baker's request, the General Staff began its own
investigation into the organization and administration of the Aviation
Section. Lt. Herbert A. Dargue, the officer in charge of training at
San Diego, aired his grievances before the committee and added his voice
to those calling for the removal of aviation from the Signal Corps.
He complained that the Signal Corps had no unit fully equipped for field
service and no radio set for airplanes. Speaking on behalf of most of
the aviation officers, Dargue stated their belief that the Signal Corps
lacked an officer capable of commanding the Aviation Section. The General
Staff's report, completed at the end of June, recommended that the Aviation
Section be completely separated from the Signal Corps.8
Secretary Baker reacted with caution to the increasingly bitter controversy.
Although he did not detach aviation from the Signal Corps, he did remove
Reber as chief of the Aviation Section on 5 May 1916 and temporarily
replaced him with Capt. William Mitchell. Reber's dismissal ended his
official aviation duties and also effectively finished his career as
a Signal Corps officer. He went overseas during World War I, but he
did not receive any Signal Corps-related assignments. After returning
from France, he retired in 1919 with thirty-seven years of military
service. As a private citizen, he embarked upon a successful second
career with the Radio Corporation of America where his Signal Corps
experience served him well.9
[Photograph: Captain Arnold]
Baker selected Lt. Col. George O. Squier to succeed Reber as the head
of the Aviation Section upon the completion of Squier's tour as attaché
to London. As attaché, Squier had been able to observe European aviation
and had even conducted several secret missions to the front. His contacts
within the industrial and scientific communities as well as his long
association with Army aviation made him a good choice for the job. Upon
Squier's arrival in Washington, Captain Mitchell became his assistant.10
In his new position, Squier contended with pressure from outside as
well as inside the Army. The Aero Club of America, for example, criticized
the Signal Corps for failing to adequately promote aviation within the
National Guard, while the press, reacting to the misadventures in Mexico,
sharply castigated the entire program.11
Meanwhile, at San Diego Captain Cowan was relieved as commander of
the aviation school and replaced by Col. William A. Glassford, a signal
officer who had served in the Corps since 1874.12
Many of the staff and faculty also lost their jobs. Two pioneer aviators,
returning to aeronautical duty after completing other assignments, served
under Glassford: Capts. Frank P. Lahm and Henry H. Arnold. (While serving
as the school's supply officer, Arnold eventually overcame his fear
of flying that stemmed from his harrowing experience at Fort Riley several
years earlier.) But despite the change in administration, the troubles
at the school had not ended. Glassford, too, came under fire in January
1917 regarding his lack of vigor in searching for two pilots from the
school who had crashed in the Mexican desert. Fortunately they were
found by a civilian search party, alive but somewhat the worse for wear
after more than a week of wandering in the desert. Consequently, the
Inspector General's Department launched yet another inquiry, which recommended
that Glassford and several of his staff members be relieved. Glassford
retired on 11 April 1917, only five days after the declaration of war
with Germany.13
With Scriven's retirement in February 1917, Squier became the new
chief signal officer, and Lt. Col. John B. Bennet took over the duties
of the Aviation Section. In his last annual report as chief signal officer
Scriven remarked: "The plan of the General Staff, approved by the
Secretary of War, contemplates, and as I think very properly, the eventual
separation of the aviation service from the
Signal Corps. The separation of this service from any technical corps
should take place when the Air Service is capable of standing alone.
This time has not yet come."14
After all the squabbles of the past few years, the Signal Corps and
its Aviation Section headed toward war still tethered uneasily to each
other.
"Over Here ": Mobilization and Training
Germany's resumption of unrestricted submarine warfare forced President
Wilson to request a declaration of war in April 1917. Following this
action came the daunting task of mobilizing the nation's resources,
both men and materiel, with which to fight. Even after war was declared
the Wilson administration found it difficult to define America's role
in the contest. The War Department initially felt no sense of urgency
regarding mobilization and foresaw no massive commitment of troops to
Europe. Military planners estimated that it would take about two years
to raise and train an army large enough to achieve victory in Europe.15
In the meantime, the administration could only provide moral support
to the Allied cause by responding to the French government's request
to immediately deploy one division. In May 1917 Secretary Baker authorized
the organization of the 1st Expeditionary Division (later redesignated
as the 1st Division), and its elements began arriving in France late
the following month. But these units needed extensive training before
they would be ready to enter the line.
The type of total war being waged in Europe required military forces
far greater than those the nation then had in uniform. Shortly after
the declaration of war Congress passed the Selective Service Act, which
President Wilson signed on 18 May 1917. Unlike the unsuccessful attempt
to draft recruits during the Civil War, this bill eliminated such unfair
practices as bounties, purchased exemptions, and substitutes. Moreover,
local civilian draft boards rather than a federal agency administered
the process. The law aroused little opposition, and about twenty-four
million men registered. Nearly three million of them entered the armed
forces between May 1917 and November 1918. Approximately forty-one thousand
men, or slightly over 17 percent of those inducted, joined the Signal
Corps.16
The selective service legislation also authorized the president to raise
the Regular Army and National Guard to war strength and to mobilize
the National Guard for federal service. It further created a third segment
of the defense structure, known as the National Army, a force to be
raised in two increments of 500,000 men each.17
For the Signal Corps, mobilization meant a rapid and vast expansion.
In April 1917 the ground troops of the Signal Corps consisted of 55
officers and 1,570 enlisted men. These soldiers were divided into 4
field signal battalions, 4 field telegraph battalions, and 6 depot companies
(administrative units with no fixed strength assigned to each territorial
department).18
Shortly after arriving in France, Pershing called for approximately
one million men to be sent over by the end of 1918. As Allied fortunes
declined, Pershing increased his request to one hundred divisions to
arrive by July 1919.
[Photograph: General Squier]
These divisions would be organized in accordance with new tables of organization calling for a "square" structure, that is, a division comprising two infantry brigades, each
with two infantry regiments. The square divisions, based upon study
of the British and French armies, were to be larger than their predecessors
and include the necessary support troops to withstand sustained combat.
Pershing's request, meanwhile, would require the Signal Corps to supply
at least one hundred field signal battalions, or roughly twenty-five
thousand officers and men as organized in the spring of 1917. While
President Wilson ultimately approved a projected force of only eighty
divisions, the Signal Corps still faced a tremendous task.19
The training of the mass of men called to the colors for signal duty
overwhelmed the capacity of the Signal School at Fort Leavenworth. Thus,
in May 1917 the Corps established additional mobilization and training
camps at Little Silver, New Jersey (Camp Alfred Vail); Leon Springs,
Texas (Camp Samuel F. B. Morse); and the Presidio of Monterey, California.
In 1918 the Signal Corps transferred its activities at Camp Morse and
Fort Leavenworth to Camp Meade, Maryland, where it had earlier opened
a radio school in December 1917. In addition, many of the nation's colleges
and universities offered technical training for prospective Signal Corps
personnel.20
To fight a total war such as that in Europe, the nation, and the Signal
Corps in particular, had to mobilize its technological, scientific,
and economic resources as never before. Consequently a huge bureaucracy
emerged, familiar to us today as the military-industrial complex but
unparalleled at that time, to coordinate the many war-related activities.
Not only did the War Department balloon in size, but the civilian side
of government likewise underwent tremendous expansion.21
To obtain needed technical expertise in communications, the Army called
upon the private sector. While the United States lagged behind Europe
in some major technological areas such as aviation, it led the world
in the field of telephone technology, thanks largely to the achievements
of the Bell System. Unlike its European allies, the U.S. government
did not control the national telegraph and telephone systems in peacetime.
As a preparedness measure, the War Department in 1916 had begun issuing
commissions in the Signal Corps Officers' Reserve Corps to executives
of leading commercial telephone and telegraph companies. John J. Carty, chief engineer of the American Telephone
and Telegraph Company (AT&T), figured prominently in this group.
Commissioned as a major in the Signal Reserve, Carty undertook the recruitment
of men from the Bell System and other communications companies.22
The Army needed a variety of specialists: telephone and telegraph operators,
linemen, and cable splicers, to name a few. (As previously noted, the
prewar Signal Corps had only four telegraph battalions.) The recruitment
of men already possessing the requisite skills obviously lightened the
Signal Corps' training load. Ultimately the Bell System provided twelve
telegraph battalions to the war effort (numbered 401 to 412) that served
at the army and corps levels. Each unit comprised men drawn from a regional
company. The 406th Telegraph Battalion, for example, contained employees
from Pennsylvania Bell, while the 411th came from Pacific Bell. The
Signal Corps obtained another four battalions from railway telegraph
organizations.23
Western Electric, the manufacturing arm of the Bell System, additionally
furnished two radio companies: Company A, 314th Field Signal Battalion,
and Company A, 319th Field Signal Battalion.24
In order to release men for the front lines, the Army employed approximately
two hundred women telephone operators to serve overseas. These women,
who retained their civilian status, became members of the Signal Corps
Female Telephone Operators Unit.25
They are perhaps better known as the "Hello Girls." In order
to operate switchboards in France and England, they needed to be fluent
in both French and English. Moreover, because the Army contained few
French-speaking operators, these women no doubt made inter-Allied communications
proceed much more smoothly. Beginning in November 1917, the Signal Corps
recruited women from the commercial telephone companies; to obtain enough
bilingual operators, the Corps also accepted untrained volunteers who
met the language requirement. After a training period, the first detachment
of women, in the charge of chief operator Grace Banker, departed from
New York City early in March 1918. Soon members of the unit were operating
telephone exchanges of the American Expeditionary Forces in Paris, Chaumont,
and seventy-five other cities and towns in France as well as in London,
Southampton, and Winchester, England.26
The Navy had taken the lead in mobilizing science by establishing
the Naval Consulting Board in 1915. Composed of representatives of the
nation's leading engineering societies and chaired by Thomas A. Edison,
its major activity became the screening of inventions submitted by private
citizens.27
The following year the National Academy of Sciences created the National
Research Council and offered its services to the government to coordinate
military-related research. The council's membership embraced governmental,
educational, and industrial organizations. Chief Signal Officer Squier
served on the council's Military Committee along with the heads of the
other technical bureaus of the Army and Navy. With his scientific background,
Squier actively promoted the council's efforts and exerted considerable
control over its activities in general.
[Photograph: "Hello Girls" operate a switchboard at Chaumont, France]
First of all, Robert A. Millikan, professor of physics at the University of Chicago and the council's executive officer, became a major in the Signal Corps Officers' Reserve
Corps and served as head of the Signal Corps' new Science and Research
Division, established in October 1917. The division's offices were located
in the building housing the National Research Council, and many of the
council's scientists donned uniforms and served under Millikan. Through
the Officers' Reserve program, the Signal Corps recruited additional
scientists and engineers from the private sector.28
In the past the Signal Corps' Engineering Division had performed
what is now called research and development in its laboratories on Pennsylvania
Avenue and at the Bureau of Standards.29
But the wartime demand for new and improved communication methods fostered
a greater specialization of functions. In July 1917 the chief signal
officer established a separate Radio Division with electrical engineering
becoming a section of the new Equipment Division. After several reorganizations
within the Signal Office, electrical and radio engineering were reunited
as sections of the Research and Engineering Division in July 1918.30
Radio research activities soon outgrew the Signal Corps' existing laboratory
space. Thus, in the spring of 1918, the Corps transferred this work
to new facilities at Little Silver, New Jersey, where a training camp
had already been established. At Camp Vail, located on the site of a
former racetrack, laboratory buildings and several airplane hangars
soon appeared. Later the post would become known as Fort Monmouth.31
The primary mission of the Signal Corps' laboratories was the development
of new types of radios, both air and ground. The Army needed radios
for many different purposes: air-to-ground and plane-to-plane communication,
aerial fire-control, direction-finding, and, of course, for ground
tactical communication. Not only did radios have to be made in large
numbers for the first time, they needed to be constructed sturdily enough
to withstand the rigors of combat. In other words, they had to be rugged,
reliable, and portable. To achieve these goals, the Radio Division devoted
considerable effort to the improvement of vacuum tubes. While these
devices had been used prior to the war, in particular as telephone signal
repeaters or amplifiers, they had never been mass-produced. Western
Electric and General Electric manufactured thousands for the Army. The
engineering facilities of these and many other companies provided significant
assistance to the Signal Corps in developing radio apparatus. The Army
also benefited from advances in radio design made by the Navy. The profusion
of new equipment prompted the Signal Corps to adopt standard nomenclature
for its items, and the now familiar letters SCR began to appear. This
designation originally stood for "set, complete, radio" but
has come to signify "Signal Corps radio."32
Despite the conscientious efforts by government and industry, the
limited duration of America's involvement in the war left little time
for the development and application of new technology, and the United
States relied chiefly on Allied radio equipment. Nevertheless, the Signal
Corps made some breakthroughs, especially in airborne radiotelephony,
an achievement on which General Squier placed great emphasis. Not only
would radio allow the pilot and his observer to communicate more easily
between themselves (instead of using hand signals) as well as with the
ground, it would also make voice-commanded squadrons possible. An aero
squadron based at Camp Vail made nearly one hundred flights per week
to test new equipment. In a public demonstration held in early 1918,
President and Mrs. Wilson talked with a pilot flying over the White
House. While some aerial radiotelephone apparatus arrived in France
by the fall of 1918, it did not see use in combat. The Signal Corps
also experimented with land-based radiotelephone equipment, but it did
not attain notable success prior to the Armistice. Although most of
the new devices failed to reach fruition before the war ended, they
had a profound effect on communications in the postwar period, for
out of these wartime efforts grew the American radio broadcasting industry
in the 1920s.33
The electrical engineering section's work was initially hampered by
the transfer of both of the Signal Corps' laboratories to the Radio
Division. With the relocation of the radio facilities to Camp Vail,
electrical engineering returned to the laboratory on Pennsylvania Avenue.
The section's responsibilities included the preparation of drawings
and specifications for all Signal Corps equipment to be produced, except
for radios. It also investigated inventions submitted to the Signal
Corps by private citizens.34
The section's developmental efforts concentrated on designing and adapting
equipment suitable to conditions on the battlefields of France. While
the Signal Corps based its field telephone on a model manufactured by
Western Electric for the Forest Service, the Corps also developed a
special type of phone for use when wearing a gas mask.35
Among their other projects, the section's electrical engineers made improvements in the design of animal-drawn
wire carts, making them relatively light in weight yet strong enough
to carry heavy-duty wire. They also worked on the manufacture of a new
type of wire for field lines known as twisted pair, which the Signal
Corps had initially tested in Mexico. This wire derived its name from
its composition of two wires twisted about each other and covered with
insulation. Each wire, in turn, was composed of seven fine wires, four
bronze and three steel. By using twisted pair, also known as outpost
wire, circuits could be made secure because they did not utilize a ground
return that the enemy could easily tap. The wire was manufactured in
various colors in order to readily identify connections in the field:
for example, red for lines to the artillery and yellow to regimental
headquarters. To enable a man on foot to lay and pick up this wire,
the section designed a breast reel that held about a half-mile of wire.
Unfortunately, twisted pair's original light rubber insulation led to
poor performance when wet and caused at least one unit to refer to
it as "please don't rain wire." The wire was subsequently
improved with heavier insulation.36
In the late summer of 1916 Congress created the Council of National
Defense to facilitate national economic and industrial mobilization.
Despite its name, this body did not set policy but rather acted as a
central planning office to coordinate military needs with the nation's
industrial capabilities. The council included the secretaries of war,
navy, interior, agriculture, commerce, and labor, with Secretary of
War Baker serving as chairman. Congress also established an advisory
commission to the council comprising seven prominent specialists from
the private sector. The commission, in turn, divided its work among
several committees, each headed by the member with expertise in that
area. Daniel Willard, president of the Baltimore and Ohio Railroad,
chaired the Transportation and Communication Committee on which Chief
Signal Officer Squier served. Both the National Research Council and
the Naval Consulting Board worked in conjunction with the Council of
National Defense.37
Within the War Department, decentralization hindered mobilization
efforts because the various bureaus continued to act independently.
The resulting chaos crippled the Army's supply system while the nation's
entrance into the war necessitated better coordination. The Signal Corps
competed with all the other branches for its supplies, and the War
Department waited until January 1918 to establish a centralized Purchasing
Service within the Office of the Chief of Staff.38
Meanwhile, in July 1917 the Council of National Defense had created
the War Industries Board which, under the chairmanship of Bernard Baruch,
ultimately wielded considerable influence over the setting of priorities
and the fixing of prices for items purchased by both the United States
government and the Allies. Although military representatives sat on
the board's various commodities sections, the Army successfully resisted
civilian control of its purchasing.39
American business faced the challenge of creating several new industries
to replace products supplied by belligerent nations, particularly Germany.
In connection with the Signal Corps' operations, most high-quality
optical lenses for field glasses and cameras had formerly been imported
from Germany or
Belgium, now occupied by German forces. Such companies as Bausch
& Lomb of Rochester, New York, stepped in to fill the void. Meanwhile,
citizens were urged to lend their binoculars to the military services.
Germany had also produced most photographic chemicals and materials
which American firms, such as the Eastman Kodak Company, now began to
manufacture.40
While the majority of Signal Corpsmen served overseas, there remained
important communication duties to handle on the home front. Prior to
the war the Corps had installed, maintained, and operated the telephone
systems at most Army posts. The tremendous growth of wartime facilities,
however, overwhelmed the branch's resources, and the Army turned to
the local telephone companies for assistance. The Army contracted with
the Bell System to provide the central office plants and to tie the
post systems into the commercial wire network. Moreover, the Army hired
civilian operators to handle the increased message traffic. The Signal
Corps continued to operate the Alaska communication system, and signal
units performed construction, maintenance, and operations in the Canal
Zone, Hawaii, and the Philippines. Meanwhile, the chronic troubles along
the Mexican border kept the 7th Field Signal Battalion busy during the
war years.41
The Signal Corps did not become involved, however, in the types of
intelligence-gathering operations it had conducted during the War with
Spain, such as the monitoring of cable traffic. Although a Military
Information Division had been created as part of the General Staff in
1903 (superseding the Military Information Division within The Adjutant
General's Office), it had subsequently diminished in importance, becoming
a committee in the 2d (War College) Section of the General Staff. In
1917 the Army established a military intelligence section on the staff,
which by the end of the war had achieved division status. During World
War I the director of military intelligence, rather than the chief signal
officer, acted as the chief military censor, while overseas the chief
of the Intelligence Section, AEF, handled similar responsibilities.42
Likewise, the Signal Corps did not control the national civilian
communication systems during World War I. The president did not take
over the commercial telephone and telegraph systems until July 1918,
and then he placed them under the postmaster general.43
As with other aspects of the war, however, the government created a
sizable and overlapping bureaucracy to control the flow of information
both within the country and with the outside world. In October 1917
the president established the Censorship Board to censor communications
by mail, cable, radio, telegraph, and telephone between the United States
and foreign nations. But the chief signal officer was not a member;
again, the postmaster general administered those operations. In addition,
the Transportation and Communication Committee of the Council of National
Defense, on which the chief signal officer did serve, dealt with the
adaptation of the telephone and telegraph lines to defense needs. In
the case of cable communications, the Navy exercised censorship. The
director of naval communications became the chief cable censor, and
his authority extended to include the War Department cable to Alaska.
The Navy also regulated radio transmissions beginning as early as 1914.
Stations owned by foreign firms caused particular concern lest they might be conveying military information.
With the nation's entry into the war, the Navy assumed control over
all radio stations, taking over those needed for naval communications
and closing the rest.44
Once mobilized, the Signal Corps stood ready to provide communications
at home in support of its operations overseas. In October 1917 the chief
signal officer's rank was raised to that of a major general. To better
handle his multifarious duties, Squier reorganized his office on several
occasions as the war progressed. By April 1918 its principal divisions
included Administration, Air, Civilian Personnel, Equipment, Land, Medical,
Science and Research, and Supply. The Land Division had responsibility
for organization and training (exclusive of aviation), telegraph and
telephone service, radio station maintenance, and coast artillery fire
control. Because of their significance, the activities of the Air Division
will be discussed in detail below. With the wartime expansion, the Signal
Office scrambled to find enough space for its personnel in sixteen different
buildings scattered throughout the nation's capital.45
"Over There ": Organization
and Training
When General Pershing set sail for Europe on 28 May 1917 aboard the
British steamship Baltic, his key staff officers accompanied him. Among
them was Col. Edgar Russel, whom Pershing had designated as chief signal
officer of the American Expeditionary Forces. (Russel was promoted to
brigadier general in the National Army on 5 August 1917.) A contemporary
of Squier from the West Point class of 1887, Russel had begun his service
with the Signal Corps during the War with Spain and for several years
had headed the Corps' Electrical Division. Most recently he had served
as chief signal officer of the Southern Department under Pershing's
command. After stopping in England, where Russel observed British signal
practices, Pershing and his staff arrived in France on 13 June and set
up their headquarters in Paris.46
Within AEF headquarters, Pershing placed the Signal Corps under the
Line of Communications, later redesignated as the Services of Supply,
which included the AEF's technical services.47
Russel, in turn, divided his own office into several divisions, the
major ones being Engineering, Telegraph and Telephone, Supplies, Radio,
Photographic, Pigeons, and Research and Inspection.48
The Research and Inspection Division, modeled after similar organizations
in the British and French armies, operated in conjunction with the scientific
efforts being conducted in the United States. The Signal Corps maintained
a laboratory in Paris, and among the civilian scientists recruited to
work there was Edwin H. Armstrong, the young electrical engineer from
Columbia University who had discovered the capabilities of de Forest's
audion. Commissioned as a captain, Armstrong began developing the superheterodyne
radio receiver, which greatly amplified weak signals and enabled precise
tuning. Unfortunately, he could not perfect it prior to the Armistice.
After the war Armstrong became known as the father of FM (frequency-modulated)
radio.49
[Photograph: General Russel]
Another primary project of the division was designing radios for tanks. Moreover, inspection detachments
from this division, located at supply depots and factories, checked
all Signal Corps apparatus received from the United States or purchased
from the Allies before distributing them to the troops.50
For the first few months Russel and his staff undertook the planning
and organization of signal operations for the AEF. The chief signal officer's
responsibilities included:
all that pertains to the technical
handling and maintenance of the U.S. military telegraph and telephone
lines and radio stations of the American Army in France. He will
exercise supervision over the duties of the Signal Corps in connection
with the construction, operation and maintenance of all telegraph,
telephone and radio installations of the system.51
His duties did not include aviation, which was managed by a separate
Air Service created by Pershing.
Russel initially leased telephone and telegraph service from the
French, but they had few lines and little equipment to spare. Moreover,
their equipment was antiquated by American standards, and the French
did not "multiplex" their lines to allow them to carry simultaneously
both telephone and telegraph traffic. Such a system required far less
wire and fewer poles, an important consideration given wartime shortages
of material and transport.52
Consequently, planning soon began for the construction of an all-American
wire network to serve the strategic communication needs of the AEF.
This system as initially conceived would run 400 miles across France
to connect the initial base port of St. Nazaire with the rear of the
American sector of operations at Gondrecourt.53
In September 1917 two Bell battalions, the 406th and 407th Telegraph
Battalions, began construction. In keeping with modern American methods,
the system ultimately incorporated repeaters, the latest in telephone
technology, which had recently made coast-to-coast service possible
in the United States.54
As AEF operations expanded, so did the extent of the wire network. By
the end of the war, the Signal Corps built over 1,700 miles of permanent
pole lines and strung nearly 23,000 miles of wire. The entire strategic
network, to include wires leased from and maintained by the French,
totaled approximately 38,000 miles.55
Transatlantic communication also ranked high on Russel's list of priorities,
and the experience he had acquired with underwater cables in Alaska
proved invaluable. Due to the limitations of existing radio technology, cables remained the most reliable means of long-distance communication.
[Photograph: Telegraph operating room at Chaumont]
Early in the war the British had severed Germany's cable connections
with the United States, and the British and French governments had appropriated
and rerouted these cables for their own use. German submarines posed
a constant threat, however, and the presence of this underwater menace
kept repairs from being made. To ensure transatlantic communication
in the event that cable connection was lost, the Navy expanded its series
of high-powered radio stations along the Atlantic coast and constructed
a station at Bordeaux, France, which became known as Radio Lafayette.
The Navy cooperated with the Signal Corps in the use of this system.
The British, meanwhile, laid a cable across the English Channel for
the Corps' use.56
While the U.S. Army established itself in France, Pershing dealt
with the complexities of Allied command relationships. From the outset,
in accordance with Secretary Baker's instructions, Pershing remained
adamant on one point: that the Americans would fight independently and
not be amalgamated with other Allied troops. He had to resist the intense
pressure applied by Allied leaders who were desperate for manpower after
three years of brutal combat and horrific losses. During the spring
of 1917 the French Army had been further weakened by mutiny, while the
British suffered enormous casualties in Flanders. Moreover, the outbreak
of the Bolshevik revolution in Russia in November 1917 led to the collapse
of the Eastern Front the following spring, thus freeing large numbers
of German troops for fighting in the west. Despite these circumstances,
Pershing held his ground.
In September 1917 Pershing transferred his headquarters to Chaumont,
located on the Marne River about 150 miles southeast of Paris in Lorraine
province. Russel moved along with him. Some Signal Corps operations
remained based in Paris, such as photography, research and inspection,
meteorology, and procurement of supplies.57
Because the sector of the front around Chaumont had been quiet for some
time, Pershing considered it a good place for American forces to eventually
enter the line. Meanwhile, at Gondrecourt elements of the 1st Division,
including its 2d Field Signal Battalion, awaited the start of combat
training.58
With the arrival of American troops, tactical communication in the
forward areas came under the control of the Zone of Advance. Col. George
S. Gibbs, who had served with the Volunteer Signal Corps during the
War with Spain and the Philippine Insurrection, became chief signal
officer, Zone of Advance, as well as assistant chief signal officer
of the AEF. He described his job as follows:
The day's work in the zone of the advance division was quite like
that in the lost and found department of a big railroad. There were
hurried trips to inspect equipment and correct requisitions. Lost shipments
were traced by telephone and sometimes by automobile. Material for
training was needed at once, and the normal means of delivery was neither
fast enough nor sure enough. The personal service from the office of the Chief
Signal Officer gave assistance right where it was needed, and no signal
outfit was allowed to remain in doubt or in need.59
Moreover, each army, corps, and division had a chief signal officer
who coordinated the signal operations of his unit and carried out the
orders of the chief signal officer, AEF. In March 1918 Russel moved
his office to Tours, the headquarters of the Services of Supply, while
Gibbs remained at Chaumont.60
Organizationally, signal units needed to adapt to conditions on the
Western Front. Trench warfare demanded changes in the structure of the
field signal battalion, specifically in the size of the outpost signal
company. As originally organized with five officers and seventy-five
men, the outpost company could not meet the communications requirements
of a square division. Working at the front lines to connect brigade
and regimental headquarters, these men had an extremely dangerous job.
Consequently, upon Pershing's recommendation, the War Department expanded
the company's enlisted strength to 280 men. As reorganized, the company
was divided into a headquarters section and four regimental sections.
These regimental sections, each containing an officer and sixty-five
men, would remain attached to infantry signal platoons (part of the
headquarters company of an infantry regiment) for the duration of trench
warfare. In open warfare the sections would be withdrawn to form a division
reserve.61
Moreover, a new unit came into existence, the depot battalion, comprising
15 officers and 400 men, which became a source of replacement personnel
overseas. Finally, all Signal Corps personnel not assigned to tactical
organizations became members of service companies that were located
at the base ports, supply depots, and headquarters.62
Because of the scarcity of experienced soldiers in the AEF, considerable
training took place in France. To this end, Pershing established a series
of Army
schools at Langres that included those for technical training. This
system included three schools for signal instruction: one for the training
of personnel from field units; one for officer candidate training; and
a third for radio operators. Due to the demand for signal officers,
the candidates' school took precedence at Langres while corps-level
schools trained commissioned and noncommissioned officers from field
units. A three-month course for candidates was eventually developed
at Langres which provided instruction in all types of signal equipment
as well as in administration, discipline, and field service regulations.
Besides Signal Corps personnel, the Langres schools trained communicators
from the Infantry, Artillery, Engineers, and Air Service.63
Additional education took place at the divisional level in accordance
with a three-phase training plan devised by Pershing. Beginning with
the 1st Division, soldiers learned the techniques of trench warfare
as well as the handling of such weapons as the hand grenade and the
machine gun. French units conducted the preliminary training, which
included the digging of practice trenches to familiarize the men with
the conditions they would be facing. Members of the 2d Field Signal
Battalion, the first signal unit to undergo this process, received instruction
in both French and British signaling methods and went to the front to
observe signal equipment in action.64
Soon they would be putting their newly acquired skills to the test.
"Over the Top": Signalmen in Battle
On 21 October 1917 the units of the 1st Division began spending trial
periods in the trenches. For a month one battalion at a time from each
regiment spent ten days with a French division. A detachment from the
2d Field Signal Battalion supported each infantry battalion. Although
stationed in a quiet area, the division experienced its first combat
on the night of 2-3 November when the Germans bombarded and raided a
portion of the sector, killing several Americans. During the attack
signalmen received their initiation in repairing lines under fire.65
At the end of November the 1st Division pulled back for a final month
of instruction in open warfare tactics, training upon which Pershing
had insisted despite French objections. In January 1918, six months
after its arrival in France, the division began defending its own portion
of the line, a sector northwest of Toul.66
Pershing continued to follow a similar training sequence with subsequent
units as they arrived. Meanwhile, many American officers and Secretary
Baker, not to mention the British and French, grew impatient with the
slow progress. Costly campaigns like that at Caporetto, Italy, in October
1917 continued to bleed the Allies white. Without substantial infusions
of American troops, the Allies could lose the war.67
Fortunately, with the arrival in France of the 2d Division (half Army
and half Marine), as well as two National Guard divisions, the 26th
and the 42d, the Americans slowly but surely began to build their strength.68
Hoping to win a final victory before the Americans could save the
Allies, the Germans launched a massive offensive in the spring of 1918.
They began in
March by attacking the British lines along the Somme River, with
the objective of splitting the British and French armies. Ironically,
what they finally achieved was the speedier entry of American troops
into the fighting. The Allies increased their pressure upon Pershing
to amalgamate American servicemen with their units, but he remained
firm about the eventual formation of an independent American army. After
prolonged negotiations, Pershing agreed to allow the British to transport
six American divisions to France, where they would train with British
units. He further agreed that during May and June shipment of combat
elements of these divisions (infantrymen and machine gunners) would
receive priority, with artillery, signal, and other support units to
follow. Ten divisions ultimately went to France under this program.69
While this arrangement delayed Pershing's plans for the formation of
an American army, it bolstered Allied morale in the face of the German
onslaught. Furthermore, during the spring crisis the Allies formed a
unified command, headed by General (later Marshal) Ferdinand Foch of
France, to better coordinate operations.
Meanwhile, on 28 May 1918 the 1st Division launched the first American
offensive at Cantigny, in the Picardy region of northern France. This
village, located on high ground in the center of a German salient in
the French lines, had already seen considerable fighting. Prior to the
attack the division carefully outlined and rehearsed the details of
its combat debut.70
Signal planning constituted an important part of the process. In
front of Cantigny the 2d Field Signal Battalion established a communications
network adapted to the conditions of trench warfare. In general, from
division headquarters forward, telephone lines ran to each infantry
battalion as well as between adjoining battalions. But the traditional
lance poles did not prove suitable for use in the trenches. Instead,
the wires were strung on short (four-foot) stakes or run along the trench
walls. The major trunk lines were placed in special shallow trenches
(known as caniveaux) or buried several feet underground
to provide protection from enemy shelling and from foot and vehicle
traffic.71
At division headquarters the telephone switchboards were installed in
underground dugouts where they could withstand artillery bombardment.
Liaison with the artillery was maintained by telephone, and from the
division to the rear, pole, or aerial, lines ran back to the corps with
which it served. Forward from the battalions to the frontline companies
the Signal Corps employed earth telegraphy, which worked by driving
iron poles into the ground to pick up electrical currents by means of
induction. This system was also referred to as T.P.S., from
the French telegraphie par sol.72 Earth
telegraphy did not provide a very secure form of communication because
the Germans could just as easily pick up the messages. Since it did
not depend upon wires, however, it was less vulnerable to artillery.
Due to its limited range, this technique was used primarily at the front.
Wireless sets provided another means of communication, but not yet
a reliable one. When necessary, visual signals supplemented these other
methods.73
The thorough preparation paid off, for the 1st Division initially
took Cantigny fairly easily. During the battle the signal troops went
"over the top" close behind the advancing infantry "and maintained
remarkably satisfactory liaison throughout."74
SIGNAL COMMUNICATIONS AT THE FRONT
The repair teams sustained many casualties, however, due to heavy concentrations
of poison gas. While the enemy repeatedly knocked the division's telephones
and radios out of action, the earth telegraphy stations remained in
operation. But holding on to the town proved more difficult. The Germans
launched several counterattacks, and fighting continued for three days.
When the battle finally ended on 31 May, the 1st Division had suffered
substantial losses but remained in possession of its prize. Moreover,
it had demonstrated that the doughboys could fight.75
With this successful introduction to combat, American units began
to shoulder more of the burden of warfare. The 2d Division, fighting
in such costly battles as Belleau Wood and Vaux, helped the French
to stop the German advance toward Paris in the area of Château-Thierry.
By mid-July the German offensive had ground to a halt. For its part
in the defense, the 3d Division earned the nickname "Rock of the
Marne." With the influx of American troops, the Allies launched
a counteroffensive, known as the Aisne-Marne campaign. The deadlock
on the Western Front was finally broken, and the tide of battle began
to turn.
As a result of these events, Pershing's plan for an independent American
army at last was realized. In August 1918 Pershing assumed command of
the newly created U.S. First Army. Lt. Col. Parker Hitt served as the
army's chief signal officer.76
Comprising two corps and nineteen divisions, its initial objective
was the reduction of the St. Mihiel salient that had jutted into
the Allied lines for four years. The salient spread across the plain
between the Meuse and the Moselle rivers in eastern France. The First
Army, supported by French units and a huge Allied aerial force controlled
by Col. William Mitchell, launched its attack on 12 September.77
As always, the Signal Corps played a vital role in the operation.
For example, members of the 55th Telegraph and 317th Field Signal Battalions,
assigned to the V Corps, had to dig a cable trench six feet deep and
one kilometer long to establish connection with the 26th Division. The
trench ran through a hill of nearly solid rock, and the men had no explosives
available. "For three days and two nights the signal men had one
piece of bread and one cup of coffee a meal each. There was no rest.
When a man fainted from exhaustion his comrades worked the harder, and
even the officers in charge wielded picks and shovels with them."78
To handle communications with the French units, six of the women telephone
operators served at First Army headquarters, less than fifteen miles
from the front.79
The Americans carefully planned the attack on St. Mihiel and maintained
its secrecy. Though the Germans expected such an assault, they did not
know when it would occur. Caught unaware before they had fully carried
out an intended withdrawal, they offered minimal opposition. Advancing
rapidly through what for four years had been no man's land, the first
American units entered St. Mihiel on 13 September. By 16 September the
campaign had come to a successful conclusion; the salient had been
eliminated and an all-American army had won its first victory.80
Before the fighting at St. Mihiel had ended, the Allies began preparations
for a final offensive. The American contribution would be known as the
Meuse-Argonne campaign. In addition to the First Army, American units
participating in the Allied effort included the 2d and 36th Divisions,
which served with the French, and the II Corps, which fought with the
British. Beginning on 26 September, American and French divisions attempted
to surround the German forces in the Argonne Forest of eastern France.
Along with the British and Belgian armies fighting to the north, the
Allies planned to drive the Germans out of France before winter. The
Allies would then push north toward Sedan (a city that France had lost
to Germany in 1870) to cut the vital railroad line that supplied the
German Army. All told, more than a million Americans, most of them with
little or no combat training or experience, took part in this campaign.
Meanwhile the newly created U.S. Second Army (organized 20 September
1918) occupied the old St. Mihiel sector.
Although the troops initially made substantial progress, they eventually
bogged down as the Germans increased their resistance. The defenders
occupied a series of well-fortified positions, known collectively as
the Hindenburg Line, against which the Americans made costly frontal
assaults. From the Argonne foothills on the left and the Heights of
the Meuse on the right, German batteries delivered devastating artillery
fire upon the attackers. In addition to the enemy, the inexperienced
soldiers faced difficulties of transportation and command and control.
TESTING A TELEPHONE LINE LEFT BEHIND BY THE GERMANS AT ST. MIHIEL
The formidable terrain, heavily forested and cut by ravines, hindered movement of any
type, and the existing roadways were usually jammed with men and vehicles.
Man-made obstacles, especially barbed wire, presented additional impediments.
The transportation problem exacerbated the already severe supply shortages
suffered by the Signal Corps in particular and the AEF in general.81
The Signal Corps further lacked sufficient numbers of vehicles to haul
its equipment. In May 1918 control of all motor vehicles had been placed
under the Motor Transport Corps, and "the officers handling Motor
Transport never understood that Signal Corps combat motor vehicles used
for laying wires and maintaining lines were
technical instruments of that business, not just so much truck tonnage."82
Consequently, signalmen sometimes had to carry poles on their backs
for several miles. Despite the exertions of the Signal Corps, communication
between divisions and corps often broke down, particularly in units
experiencing their first combat. As Pershing remarked regarding the
317th Field Signal Battalion, assigned to the V Corps, this unit "joined
on the eve of battle and had to learn its duties under fire."83
In this last-ditch defense, the Germans hurled some of their best
battle-hardened units against the Allies. Nevertheless, despite slow
progress and mounting casualties, the French and American forces inexorably
pushed the Germans back. By 10 October, with the addition of more seasoned
soldiers from the St. Mihiel area, the Americans controlled the Argonne
Forest. But much bitter fighting remained between the Argonne and the
Meuse River before the Americans completely penetrated the Hindenburg
Line. Exhausted and demoralized after four years of combat, the Germans
had no fresh troops to throw into the fray, and the unrelenting pressure
applied by the Allies led the German government to sue for peace.
While diplomatic negotiations proceeded, Pershing prepared for the
final thrust by reorganizing his forces. Maj. Gen. Robert L. Bullard
became commander of the Second Army on 12 October, with Col. Hanson
B. Black as his chief signal officer. Four days later Maj. Gen. Hunter
Liggett assumed command of the First Army. Pershing, meanwhile, took
control of the new army group.84
After restoring his battered troops to combat readiness, Liggett resumed
the offensive on 1 November. Forcing the Germans to withdraw behind
the Meuse, the Americans pursued them in the direction of Sedan. During
this rapid advance, the Signal Corps succeeded in maintaining communications
by using the German permanent lines.85
American units had reached the outskirts of Sedan when the signing of
the Armistice ended the campaign and the war on 11 November 1918.86
Each signal unit participating in this campaign made its own unique
contribution to victory. One that merits specific mention is the 325th
Field Signal Battalion of the 92d Division, the only black signal unit
to serve in World War I. Arriving in France in June 1918, the 325th
had first undergone training and then served in the trenches of the
St. Die sector for four weeks before heading for the Argonne. A platoon
of the 325th, supporting the 368th Infantry, saw action during the
battle. In addition to their signal duties, several platoon members
volunteered to take a German machine gun nest encountered while scouting
a location for a new command post. One of these signalmen, Cpl. Charles
S. Boykin, was killed during this engagement, which ultimately succeeded
in capturing the enemy position.87
Throughout the Meuse-Argonne campaign, members of the Female Telephone
Operators Unit continued to work at First Army headquarters, now located
near Verdun, with the initial complement of six supplemented by seven
additional women. On 13 October a fire broke out in the barracks housing
the main switchboard.
MEMBERS OF THE 325TH FIELD SIGNAL BATTALION STRING WIRE IN NO MAN'S LAND
The women remained on duty until they were finally forced
to evacuate, but they returned to their posts within an hour. Their
devotion to duty won them a commendation from the chief signal officer
of the First Army.88
A detachment of women also served at Second Army headquarters, but not
during active operations. Grace Banker, who was chief operator at First
Army headquarters, received a Distinguished Service Medal for her wartime
efforts.89
While most signalmen served in France, some saw action in other locations.
In September 1918, Company D, 53d Telegraph Battalion, arrived in Vladivostok
to provide communications for American troops in Siberia. The following
month the 112th and 316th Field Signal Battalions, belonging to the
37th and 91st Divisions, respectively, went to Belgium to participate
in the fighting in the Ypres district. The Army also sent a detachment
of signal soldiers to Italy to serve with the signal platoon of the
332d Infantry.90
As for signaling methods, wire communications, in particular the field
telephone, proved to be the chief means of signaling used by the United
States Army during World War I. A field telephone could operate over
a range of fifteen to twenty-five miles, and a field telegraph,
which required less current, could relay messages up to hundreds of
miles.91
The Signal Corps soon found, however,
that it had to make some adjustments to its equipment. It learned
early that the buzzer, which had operated well on the Mexican border
and was best suited to use on improvised field lines, could be easily
intercepted by the enemy. Later in the war improved buzzerphones came
into use.92
Furthermore, the inadequate insulation of outpost wire enabled the Germans
to intercept messages by means of leaks through the ground. The introduction
of heavier insulation alleviated the problem. Since the signalers left
most of this wire where it lay, the Army used tremendous quantities.
By the summer of 1918 the United States manufactured twenty thousand
miles of outpost wire per month.93
To increase mobility, the Signal Corps developed portable telegraph
and telephone stations, mounted on truck chassis. Because the truck's
engine supplied power to the storage batteries, each station could operate
independently. Myer's telegraph train had entered the age of the automobile.94
The Germans, influenced by the successful use of the telephone by
the Japanese in the Russo-Japanese War, had discarded the telegraph
as obsolete in 1910. They entered World War I entrusting their communications
to the telephone and the radiotelegraph. The shortcomings of these methods,
especially for long-distance communications, soon caused the German
Army to reinstate wire telegraphy as part of its signaling system.95
Although radio held great promise for military communications, the
instruments available during World War I proved unsuitable for extensive
frontline use. The prewar radios used by the Signal Corps had been relatively
high-powered sets designed for a large operating area; they were not
meant to be used in the restricted conditions of trench warfare where
their inability to be finely tuned caused them to interfere with the
sets used by the Allies. Moreover, the spark-gap equipment weighed too
much (up to 500 pounds) to be easily moved and often broke down. With
the assistance of European radio experts, the Signal Corps developed
its own models and had approximately twenty-five different types in
production when the war ended. In the meantime, American forces used
French radios. Despite some improvements, particularly in the production
of vacuum tubes, "radio carried little of the war's communications
load," a fact that had a direct impact on the battlefield.96
The high combat casualty rates of World War I can partly be attributed
to the lack of a reliable wireless communications system. Once soldiers
went "over the top," they found themselves isolated. During
deafening artillery barrages a commander could not control his men with
his voice, and vision became limited amid the fog of battle. In order
to maintain contact, troops tended to move in groups that made them
easy targets for enemy machine gunners. Although wire lines were portable,
they could not last long under constant and withering artillery bombardment
that chewed them to bits; what the shellfire spared often fell victim
to the treads of tanks or other vehicles. With their communications
cut off, attackers found it difficult if not impossible to call for
reinforcements or artillery support.
The situation did not improve significantly under defensive conditions.
Shelling continued to destroy wire lines, and standard radio antennas
proved a
popular enemy target. To solve the latter problem, the Signal Corps
developed a loop set with a receiving antenna that lay on the ground
and a small loop connected with the spark gap that served as the transmitting
antenna.97
Radio's chief role was for intelligence purposes. While aviators handled
reconnaissance and intelligence gathering from the air, the signalmen
on the ground used their radios to obtain information about the enemy.
The Radio Division of the chief signal officer, AEF, had responsibility
for both air and ground radio operation, including radio intelligence,
and a radio section served with each field army.98
At intercept stations, Signal Corpsmen copied coded messages sent from
German ground radio stations and forwarded them to the radio sections
for decoding. In addition to those in the field, the Signal Corps operated
an intercept station at general headquarters. (At listening stations
located in no man's land, the Signal Corps similarly monitored enemy
telephone and telegraph messages.)99
Using goniometry, or direction finding by means of measuring angles,
Signal Corpsmen also obtained bearings on enemy radio transmitters so
that the location of the stations could be identified.100
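The geometry behind such a fix can be sketched simply: two stations on a known baseline each report a compass bearing toward the transmitter, and the intersection of the two bearing lines locates it. The following is an illustrative modern reconstruction on a flat map, not documented Signal Corps practice; the station positions, bearings, and function name are hypothetical.

```python
import math

def triangulate(p1, brg1, p2, brg2):
    """Intersect two bearing lines to locate a transmitter.

    p1, p2: (x, y) positions of the listening stations (km, flat map).
    brg1, brg2: compass bearings in degrees (0 = north, 90 = east)
    measured at each station toward the transmitter.
    """
    # Convert compass bearings to unit direction vectors (north = +y, east = +x).
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no fix possible")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two stations 10 km apart on an east-west baseline; the transmitter bears
# northeast from the first station and northwest from the second.
x, y = triangulate((0, 0), 45.0, (10, 0), 315.0)
print(round(x, 3), round(y, 3))  # the fix plots at (5.0, 5.0)
```

With more than two stations the bearings rarely intersect in a single point, so in practice the fix was taken as the small "cocked hat" of error where the lines nearly cross.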
Goniometric stations could also detect incoming airplanes from their
radio signals. Furthermore, from the amount of radio traffic, the strength
of enemy troops could be determined. Radios could also be used to divert
the Germans away from where attacks were being planned by broadcasting
false radio traffic. The Signal Corps successfully exercised this ploy
prior to the resumption of the offensive along the Meuse on 1 November
1918. The radio section of the Signal Corps worked closely with the
radio intelligence section of the General Staff, passing along the information
it collected for transcription and analysis regarding enemy operations
and intentions.101
Although cryptography, the enciphering and deciphering of messages
according to specified codes, had been included in the curriculum of
the Signal School since 1912, the Signal Corps had not strictly practiced
communications security prior to the war. The new War Department Telegraph
Code of 1915 had chiefly served as an economy measure to reduce the
length of transmissions, rather than as a means to assure their secrecy.102
In the AEF, however, the office of the chief signal officer included
a Code Compilation Section where officers devised the so-called River
and Lake Codes, which were distributed to the First and Second Armies,
respectively, for use in both wire and wireless communications. Maj.
Joseph O. Mauborgne, future chief signal officer and head of the Research
and Engineering Division, developed an improved field cipher device
which replaced the cipher disk. Mauborgne's apparatus, a cylinder with
twenty-six rotating disks, bore a striking similarity to one invented
by Thomas Jefferson when he was secretary of state to protect diplomatic
correspondence. However, the existence of the earlier device remained
unknown until 1922, when a researcher found its description among Jefferson's
papers.103
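The principle of such a wheel cipher can be sketched in a few lines: each disk carries its own scrambled alphabet, one disk per letter of the message; the sender aligns the plaintext along one row of the cylinder and transmits a different row. The disk alphabets, seed, and fixed-offset read-off below are illustrative simplifications (on the real device the receiver scanned all rows for the one that read as plain English); none of these details are taken from Mauborgne's apparatus.

```python
import random
import string

def make_disks(n=26, seed=1918):
    """Build n disks, each a randomly scrambled alphabet (illustrative)."""
    rng = random.Random(seed)
    disks = []
    for _ in range(n):
        letters = list(string.ascii_uppercase)
        rng.shuffle(letters)
        disks.append("".join(letters))
    return disks

def encrypt(plaintext, disks, offset=7):
    """Align the message (letters only, one per disk) along one row and
    read off the row `offset` positions below it."""
    out = []
    for ch, disk in zip(plaintext.upper(), disks):
        i = disk.index(ch)
        out.append(disk[(i + offset) % 26])
    return "".join(out)

def decrypt(ciphertext, disks, offset=7):
    """Reverse the rotation to recover the original row."""
    out = []
    for ch, disk in zip(ciphertext.upper(), disks):
        i = disk.index(ch)
        out.append(disk[(i - offset) % 26])
    return "".join(out)

disks = make_disks()
ct = encrypt("ARMISTICE", disks)
assert decrypt(ct, disks) == "ARMISTICE"  # round trip recovers the message
```

Because every disk scrambles the alphabet differently, identical plaintext letters encrypt to different ciphertext letters depending on position, which defeats simple frequency analysis; the security rests entirely on keeping the disk order and alphabets secret.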
To enforce security, listening stations monitored friendly traffic for
lapses in procedure.104
While signal officers performed cryptography, military intelligence
officers conducted cryptanalysis, or the breaking of unknown codes.105
Despite advances in speed, electrical communications could not always
be relied upon to get the message through. Wire communications, in particular,
were extremely vulnerable to artillery fire and the ravages of wheeled
and tracked vehicles, not to mention enemy wire cutters. Thus, the Signal
Corps built a measure of redundancy into its communications system
as insurance. Traditional communication methods, such as runners and
mounted messengers, continued to perform their services, with the use
of motorcycle dispatch riders constituting a modern variation. Signal
repair parties also used motorcycles, when they were available, to travel
to the scene of a problem.106
Visual signaling had likewise not entirely disappeared from the Signal
Corps' arsenal. The familiar red and white wigwag flags remained in
use to a limited extent, but the flagstaff underwent some changes. Since
the wooden staffs broke rather easily, the Corps contracted with a fishing
rod company to manufacture steel staffs.107
Other visual signaling methods included pyrotechnics (rockets, flares);
battery-powered electric lamps, based on a French model, to replace
the previously used acetylene type; and projector lamps. The heliograph
remained in the Army's inventory but received little if any use. To
communicate with airplanes, ground troops placed panels in various
prearranged patterns upon the ground.108
Carrier pigeons contributed another "low-tech" but effective
means of communication. In July 1917, impressed with the French and
British pigeon services, Pershing requested that pigeon specialists
be commissioned into the U.S. Army. The Signal Corps had used the birds
rather unsuccessfully in Mexico, but without properly trained handlers.
In November 1917, the Signal Corps' Pigeon Service received official
authorization, and a table of organization for a pigeon company to serve
at army level was published the following June. The company comprised
9 officers and 324 soldiers and provided a pigeon group to each corps
and division.109
By the war's end the Signal Corps had sent more than fifteen thousand
trained pigeons to the AEF.110
Probably the most famous use of pigeons occurred during the fighting
in the Argonne Forest in October 1918 when elements of the 77th Division,
commanded by Maj. Charles W. Whittlesey, became separated and trapped
behind the German lines. These units became known as the "Lost
Battalion." When runners could no longer get through, Whittlesey
employed pigeons to carry messages back to division headquarters requesting
supplies and support. After several days without relief, with hope for
survival fading and friendly artillery fire raining down, the men pinned
their lives on their last bird, Cher Ami, to get word back to silence
the guns. With one eye gone, his breast bone shattered, and a leg missing,
Cher Ami completed his mission. In recognition of his remarkable accomplishment,
Cher Ami received a medal and a pension.111
Although the Signal Corps had been taking pictures since the 1880s,
World War I marked the first time that photography had been assigned
to the branch as an official function. In July 1917 the Corps established
a Photographic Section responsible for both ground and aerial photography
at home and abroad.112
SIGNAL CORPS PHOTOGRAPHER OPERATES A CAMOUFLAGED CAMERA IN FRANCE
A school for land photography opened at Columbia University in January
1918, followed six weeks later by an aerial photography school at the
Eastman Kodak Company in Rochester, New York.
Signalmen began documenting the war aboard the Baltic, taking still
and motion pictures of Pershing and his staff. The Army controlled all
combat photography, and civilian photographers were not permitted to
operate within the zone of the AEF. A photographic unit served with
each division and consisted of one motion-picture operator, one still
photographer, and their assistants. Each army and corps headquarters
had a photo detachment of one officer and six men.113
Photographic units also served with such private agencies as the American
Red Cross and the Young Men's Christian Association (YMCA) to document
their activities. Photographic technology had progressed considerably
since the days of Mathew Brady, and a combat photographer in World War
I could develop a picture in fifteen minutes using a portable darkroom.
By 1 November 1918 the Signal Corps had taken approximately 30,000 still
pictures and 750,000 feet of motion pictures that were used for training,
propaganda, and historical purposes. Wartime censorship kept the public
from seeing the most graphic images, however. The Signal Corps' invaluable
photographic collection resides today in the National Archives.114
Aerial photography included pictures taken from planes and balloons.
As a new discipline, it required the development of suitable equipment
and techniques. The Signal Corps' aerial photographers performed photo
reconnaissance and aerial mapping that provided valuable intelligence
about enemy forces and their disposition. Edward J. Steichen, who later
became one of the world's most famous photographers, served as an officer
in the Photographic Section of the Air Service, AEF.115
Another Signal Corps function, dormant for many years, gained new
prominence: meteorology. Before the United States entered the war,
the British, French, and German armies had created meteorological sections.
Commanders needed meteorological information for many purposes: to support
antiaircraft and longrange artillery; aviation; sound ranging to detect
enemy artillery; and general operational planning. The use of gas warfare
also required knowledge of wind currents and velocity. Russel soon discovered
that he, too, needed weather warriors and requested that trained observers
be sent overseas. Consequently, in June 1917, the Signal Corps established
the Meteorological Section, and Lt. Col. Robert A. Millikan of the Science
and Research Division drew up plans for the meteorological service both
at home and in Europe.116
Because the Signal Corps no longer contained trained meteorologists,
Squier sought the assistance of the National Research Council and other
outside agencies to obtain qualified men. Ironically, many of the Corps'
wartime meteorological personnel came from the ranks of the Weather
Bureau.117
One such individual, William R. Blair, received a commission as a major
in the Signal Corps' Officers Reserve Corps and became chief of the
Meteorological Service in the AEF.118
Beginning in May 1918 the section established stations at aviation
and artillery training centers. Stations in the combat zone were normally
linked to corps headquarters by telephone but transmitted information
to tactical units by radio. The meteorological section of the AEF eventually
numbered 49 officers and 404 men divided among 33 forecasting and observation
stations.119
Meanwhile, within the United States the Signal Corps set up its first
weather station in November 1917 at a familiar location, Fort Omaha,
Nebraska. Eventually the Corps had stations at most Army posts and flying
fields.120
Through a variety of means, the Signal Corps successfully supplied
communications to the front lines, and its casualty figures reflected
that fact. Its total of 2,840 casualties ranked second only to that of the Infantry.
This figure is particularly impressive because the Signal Corps (less
its Aviation Section) comprised only about 4 percent of the total AEF.121
Over three hundred decorations, both American and foreign, were awarded
to Signal Corps personnel, but none of them received the Medal of Honor.122
Following the Armistice, Pershing had warm words of praise for his signal
soldiers who "in spite of serious losses in battle, accomplished
their work, and it is not too much to say that without their faithful
and brilliant efforts and the communications which they installed, operated,
and maintained, the successes of our Armies would not have been achieved."123
The Signal Corps Loses Its Wings
The European powers, utilizing the aviation establishments they had
developed in the preceding years, made World War I the first air war.
Germany entered the conflict with nearly one thousand planes; France
with about three hundred; and England approximately two hundred fifty.124
Despite being the first country to give its army wings, the United States
was not prepared for participation in aerial combat. In April 1917 the
Signal Corps' Aviation Section comprised just 52 officers and 1,100
men plus 210 civilian employees. Its inventory contained just 55 planes,
all of which were training models.125
The Signal Corps had no combat aircraft because it continued to stress
aviation's reconnaissance mission. The War Department reinforced this
view by retaining aviation within the Signal Corps instead of making
it a separate service. Although Congress had finally appropriated substantial
sums for the aviation program, "the sudden availability of funds,"
as Maj. Benjamin D. Foulois observed, "does not buy an instant
air force."126
This lesson, unfortunately, would be learned the hard way.
As with other aspects of the war, the Army had done little planning
for aviation, and the small scale on which aerial activities had previously
been conducted provided few lessons upon which the Aviation Section
could draw. Furthermore, while the United States remained a neutral
power, the Allies had been reluctant to allow American observers to
study air operations. When asked by Congress in 1914 whether we were
keeping up with foreign developments, Colonel Reber had replied, "As
far as it is possible to say, we are keeping abreast of conditions that
we do not know anything about."127
There had been a few exceptions, however. In addition to Squier's secret
visits to the front while an attaché in England, another signal officer,
Maj. William Mitchell, had gone abroad in March 1917.128
Yet the United States had gained very little current information on
which to base its aerial program.
Once the United States entered the war, the Allies expected it to
contribute significantly to the aviation effort. After three years of
fighting, their air as well as ground forces were nearing exhaustion.
In a telegram to President Wilson dated 24 May 1917, French Premier
Alexandre Ribot requested that the United States provide 4,500 planes,
5,000 pilots, and 50,000 mechanics by the spring of 1918. He further
asked that the Americans build 2,000 airplanes and 4,000 engines each
month. Unfortunately, the cable did not specify the types of planes
or the proportions in which they should be produced. Ribot's request
nonetheless became the basis for the War Department's aviation program.129
Fulfilling the order would be quite an accomplishment for a nation
that had no aviation industry to speak of: only about one thousand planes,
both military and civilian, had been built in the United States from
1903 to 1916.130
In fact, the nation had only about a dozen aircraft manufacturing companies,
the Curtiss Aeroplane and Motor Corporation being the largest.131
Nevertheless, various government officials, including the members of
the Council of National Defense and the Aircraft Production Board, optimistically
assumed that the automotive industry could quickly convert its mass
production techniques to the building of aircraft.132
GENERALS FOULOIS AND PERSHING
They believed America would rise to the challenge. As Howard Coffin,
former president of the Hudson Motor Car Company and now head of the
Aircraft Production Board, remarked in a speech in New York on 8 June,
"The road to Berlin lies through the air. The eagle must end this
war."133
In the press, headlines heralded that American planes would soon "darken
the skies of Europe." Even Chief Signal Officer Squier remained
undaunted by the job ahead and spoke of our "winged cavalry"
that would "sweep the Germans from the sky."134
The onerous task of turning Ribot's cable into a concrete program
fell to Major Foulois. Sharing the prevailing optimism but with a sense
of urgency, he came up with a total figure of nearly 17,000 planes (12,000
for combat and 4,900 for training) and 24,000 engines to be manufactured
during the next year. He estimated the cost of such a program at nearly
two-thirds of a billion dollars.135
In keeping with the Signal Corps' emphasis on reconnaissance, observation
and pursuit planes (to protect the former) predominated in Foulois'
plan over the offensive aircraft that had become so important as the
war progressed. Foulois, having recently been promoted to brigadier
general, became the chief of the Aeronautical Division in the Office
of the Chief Signal Officer on 30 July 1917, and he served in that capacity
until November 1917 when he went overseas to direct aviation at the
front. Hap Arnold, having been promoted to full colonel in August 1917,
became the division's executive officer.136
Despite its ambitious goals, the aviation program suffered from a
fatal flaw-decentralization of control. In addition to the Signal Corps,
a large number of agencies and individuals, both military and civilian,
had a voice in its development. Coordination among them proved difficult
if not impossible.137
In 1915, Congress had created the National Advisory Committee for Aeronautics
(NACA) "to supervise and direct the scientific study of the problems
of flight" and also "to direct and conduct research and experiment
in aeronautics."138
The committee consisted of up to 12 members appointed by the president:
2 from the Army; 2
from the Navy; 1 each from the Smithsonian, the Weather Bureau, and
the Bureau of Standards; and up to 5 other qualified individuals, either
civilian or military. Initially Scriven and Reber were the Army's representatives,
with Scriven serving as chairman in 1915 and 1916.139
The Aircraft Production Board, created by the Council of National Defense
in May 1917, supervised the manufacturing activities of both the Army
and the Navy. Both General Squier and his naval counterpart, Admiral
David W. Taylor, sat on this board along with various prominent businessmen.
It became a separate entity in November 1917.140
A third body, the Joint Army and Navy Technical Aircraft Board, also
formed in May 1917, attempted to standardize the types of aircraft built
by each service.141
Pershing further complicated matters when he created the Air Service,
AEF, in June 1917. In his words, "as aviation was in no sense a
logical branch of the Signal Corps, the two were separated in the A.E.F.
as soon as practicable and an air corps was organized and maintained
as a distinct force."142
Although this separation worked well on the battlefield, it created
complications at home. Once the leaders in Washington put the aviation
program into place, they had to respond to orders received from Pershing
and his staff that often conflicted with the advice given by officers
in Europe reporting directly to the Signal Office. Members of Allied
missions to Washington also added their advice. The constantly changing
requirements for airplanes resulted in frequent revisions to the production
program, thus creating more delays than planes. The lack of a clear
direction to the aviation program, coupled with its decentralized control,
led to serious problems.143
The Joint Army and Navy Technical Aircraft Board was the first to
consider Foulois' proposal, approving it on 29 May. Having leaped this
hurdle, Squier decided to save time by bypassing the chain of command
and sent the plan directly to Secretary Baker. Baker, for his part,
endorsed the proposal and forwarded it directly to Congress without
consulting the General Staff. Responding to widespread public enthusiasm
for aviation, Congress appropriated $640 million, the largest sum devoted
to a single purpose to that time, and President Wilson approved the
sum on 24 July.144
From the start, manufacturers faced a serious obstacle that hampered
production: the maze of patents controlling the manufacture of airplane
components. The automobile industry had earlier solved a similar situation
with a cross-licensing agreement through which the manufacturers pooled
their patents. The NACA, with Squier as a key participant in the negotiations,
played a critical role in working out a comparable arrangement for
the aircraft industry. In this case, the Manufacturers Aircraft Association
was formed to administer the agreement.145
It charged a flat fee for the use of each patent within the pool and,
in turn, reimbursed the patent holders. This consensus finally brought
an end to the patent fight between the Wright and Curtiss interests.146
Through Squier, the NACA became involved in the selection of a site
for an aviation proving ground for the Signal Corps. The location chosen,
what is now known as Langley Air Force Base in Newport News, Virginia,
also became the
site of the NACA's Langley Aeronautical Laboratory.147
As an active committee member, Squier also helped develop nomenclature
for the emerging aircraft industry. For example, he urged the adoption
of the word "airplane" to replace the previously used term,
"aeroplane."148
Even with the patent licensing agreements, the United States still
faced serious aircraft production problems. The assumption that the
nation's automobile industry could be easily converted to the manufacture
of airplanes did not prove valid. American airplanes were still chiefly
custom-built and could not readily be adapted to mass production. To
secure the necessary technical expertise, the government requested
that France, England, and Italy send to this country experienced aircraft
pilots, engineers, and designers to assist in developing both manufacturing
and training methods. To obtain up-to-date information from the front,
the chief signal officer dispatched a fact-finding mission to Europe.
Headed by Maj. Raynal C. Bolling, and hence known as the Bolling Commission,
the group left in mid-June to discuss aviation matters with the Allies
and to determine which types of aircraft the United States should build.149
At the end of July the group issued its report recommending four major
types of planes: the British De Haviland DH-4 for observation and daylight
bombing; the French SPAD and British Bristol for fighters; and the Italian
Caproni for night bombing.150
They even sent home models of these planes for the manufacturers to
follow. For training purposes, the Army adopted the Curtiss JN-4 (nicknamed
the Jenny). With these guidelines, the American production effort began.151
In addition to administrative obstacles, there remained many other
hurdles to clear before the aviation program got off the ground, especially
the procurement of the necessary raw materials. World War I planes remained
relatively fragile structures fashioned mainly of wood, preferably spruce,
which is lightweight yet strong and less prone to splintering than other
softwoods. The Allies, however, could not supply enough aircraft quality
timber to meet their wartime needs. Although the forests of the Pacific
Northwest contained bountiful supplies of the needed spruce, labor strife
prevented the mills from meeting the demand. Therefore, the Army stepped
in. In November 1917 the Signal Corps created the Spruce Production
Division with headquarters at Portland, Oregon. Its operation represents
one of the more unusual aspects of the Signal Corps' aviation-related
activities during World War I. Under the command of Col. Brice P. Disque,
the division eventually employed nearly thirty thousand "spruce
soldiers" in the forests and lumber mills of the Northwest. In
a successful effort to ease the labor unrest, the Army organized civilian
forestry workers into a new union, the Loyal Legion of Loggers and Lumbermen.152
Planes also required fabric, usually linen, for covering their outer
surfaces. Before the war Belgium, Russia, and Ireland had been the principal
suppliers of flax. With Ireland remaining as the sole source following
Belgium's occupation by the Germans and the Russian revolution, another
material had to be found. Scientists at the Bureau of Standards developed
a suitable substitute made of mercerized cotton. With the change in
fabric, a new formula also had to be created for the "dope," a varnish-like
substance used to coat the fabric to protect, tighten, and waterproof it.
Consequently, the government oversaw the establishment of factories to
produce the required chemicals.153
DE HAVILAND AIRPLANES WITH LIBERTY ENGINES BEING MANUFACTURED AT THE DAYTON-WRIGHT COMPANY
The Signal Corps became involved in yet another new endeavor when it
became necessary to obtain castor beans from India and cultivate over
100,000 acres of them to yield the oil used to lubricate aircraft engines.154
Other impediments to production included the need to translate the
metric measurements used in European aircraft designs into inches and
feet. Besides the planes themselves, the Army also had to supervise
the manufacture of numerous auxiliary items, such as instrumentation;
machine guns, bombs, and other armament; radios; cameras; and special
clothing for the pilots.155
American pilots did not carry parachutes until the postwar period.156
Finally, shipping delays, with priority given to the movement of ground
troops, slowed the delivery of the planes and engines once they had
been built.
While the United States depended heavily on European aircraft technology,
it did contribute something new and noteworthy to military aviation:
the Liberty engine. Designed by two automotive engineers, Jesse G. Vincent
and Elbert J. Hall, the initial eight-cylinder model generated two hundred
horsepower and was
produced in less than six weeks. The twelve-cylinder version achieved
over three hundred horsepower, and further modification increased its
output to more than four hundred. The twelve-cylinder Liberty went into
mass production and became the standard American aircraft engine both
during and after the war.157
The Liberty finally solved the dilemma faced by the Wright brothers
and their successors since 1903 of finding an engine that was relatively
light yet could generate sufficient horsepower for sustained flight.158
While the Liberty engine itself met with success, efforts to adapt the
selected European-designed planes to accommodate it did not. Only the
De Havilands underwent successful conversion and mass production by
American manufacturers. De Haviland planes fitted with the twelve-cylinder
Liberty engine were called Liberty planes.159
Unfortunately, the De Havilands became better known as "flaming
coffins" because of their vulnerability to explosion upon being
hit.160
American factories had produced over 15,000 Liberty engines by the end
of the war, but only a fraction of these reached the front.161
Although Congress made generous wartime appropriations for aviation
(Squier requested a billion dollars for fiscal year 1919 and received
$800 million), the United States did not succeed in putting many planes
into the air. Fewer than one thousand American-built planes saw action,
despite the promises of darkened skies. Throughout the war American
pilots relied mainly upon French machines.162
While the Army struggled with its production plight, it had no trouble
attracting aviation personnel. Thousands volunteered, lured by the
romance of the Air Service and the possibility of becoming an "ace."
To screen these candidates, the Signal Corps pioneered in the use of
psychological testing.163
It lacked, however, the training facilities to turn these men into pilots.
At the outbreak of the war, the Army had just two permanent flight schools,
one at San Diego and another at Mineola, Long Island, which had been
established in 1916 for training National Guard and Reserve personnel.
A third field, a temporary facility at Essington, Pennsylvania (near
Philadelphia), had opened just five days before the United States entered
the conflict. As part of his planning function, Foulois had selected
sites for new installations, and eventually the War Department was operating
twenty-seven training fields within the United States. These included
Wilbur Wright Field, located on Huffman Prairie not far from Dayton,
Ohio, where the brothers had conducted many of their early experimental
flights and which is now part of Wright-Patterson Air Force Base.164
During the summer of 1917, while the new fields were being built,
the Canadian government provided flying facilities in exchange for the
use of American fields during the winter. Moreover, many cadets, especially
in these early months of American involvement, received their training
in England, France, and Italy. In addition to the training fields at
home, the United States eventually constructed sixteen flying fields
in Europe, the largest being the aviation center at Issoudun, France,
that covered an area of thirty-six square miles.165
As the problems at San Diego had indicated, however, pilot training
was not a simple process. While it took three to four months to train
a ground soldier, the time required to adequately train a pilot could
be anywhere between six and nine months.166
First, prospective pilots underwent two to three months of ground, or
pre-flight, training at several leading universities where they studied
the theory and principles of flight.167
The students next moved on for six to eight weeks of preliminary flight
training at the Signal Corps aviation schools, which culminated in a
solo 60-mile cross-country flight.168
They then graduated to advanced training where they specialized in
reconnaissance, pursuit, or bomber flying. Once overseas, the pilots
underwent combat training behind the lines.169
In addition to flying, all pilots were instructed in aerial gunnery.
Specialized radio and photographic personnel also had to be trained,
as well as mechanics to keep the planes in the air.170
The Air Service, AEF, could not make its presence felt at the front
until the last months of the war, and a detailed discussion of its combat
operations will not be given here. When the United States entered the
war, only the 1st Aero Squadron had been immediately available to serve
overseas, and it had arrived in Europe on 1 September 1917. The unit
received training in France as an observation squadron and became part
of the I Corps Observation Group under French tactical control.171
Although their service was relatively brief, American aviators gave
a good account of themselves.172
As part of its aviation program, the Signal Corps renewed its interest
in lighter-than-air craft. In Europe captive balloons were being used
for artillery observation, and the observers communicated with the ground
via telephone. Shortly after the declaration of war the Signal Corps
reopened its Balloon School at Fort Omaha.173
The Army also established balloon schools at Camp John Wise, Texas (near
San Antonio); Arcadia, California (later known as Ross Field); and Lee
Hall, Virginia. Veteran Army aeronaut Col. Charles deF. Chandler was
in charge of the Balloon Service, AEF, and seventeen balloon companies
eventually saw action.174
In addition, Millikan's Science and Research Division conducted a variety
of experiments with balloons, among them attempts to use them to distribute
propaganda.175
The beginning of the end for the Signal Corps' Aviation Section came
in November 1917 when Gutzon Borglum (later the sculptor of the presidents
at Mount Rushmore, South Dakota), a member of the Aero Club of America,
accused the War Department of plotting to give control of the aircraft
industry to the automobile manufacturers. With President Wilson's permission,
Borglum launched his own investigation of the aircraft industry.176
Hoping to reassure the public, Secretary of War Baker announced on 21
February 1918, just before leaving for France, that the first American
planes with Liberty engines were on their way to the front, giving the
impression that production was ahead of schedule. Rather than ease tensions,
he had added fuel to the fire. In actuality, only one DH-4 had been
shipped from Dayton, and it was destroyed when the Germans torpedoed
the ship carrying it to Europe. Not until May 1918 did the first
American-built DH-4 fly in France.177
COLONEL DEEDS
The press, meanwhile, had been printing exaggerated stories about the
thousands of American planes in France. Pershing, in response, sent
a cable to Baker on 28 February in which he urgently recommended that
the publication of such articles be stopped.178
As the public became aware of the shortcomings in the aviation program,
the backlash began.
Borglum, in his report to the president, claimed that the Aircraft
Production Board had squandered the hundreds of millions of dollars
appropriated by Congress. Singling out Edward A. Deeds, head of the
Signal Corps' Equipment Division and thereby in charge of aircraft procurement,
as the culprit, Borglum caused a sensation. Before the war Deeds had
gained prominence as a businessman in Dayton, having served as an executive
of the National Cash Register Company and as a founder of the Dayton
Engineering Laboratories Company (Delco). He was also one of the organizers
of the Dayton-Wright Airplane Company.179
To conduct his wartime work with the Signal Corps, Deeds had received
a commission as a colonel. Although Wilson ultimately repudiated Borglum,
the wheels of change had been set in motion as Congress and other agencies
began probing into aviation matters.180
Acting Secretary of War Benedict Crowell, in Baker's absence, had
ordered an investigation, as did Chief Signal Officer Squier and Howard
Coffin, chairman of the Aircraft Production Board.181
The Crowell committee's preliminary report, issued on 12 April, pointed
out that few soldiers had possessed any knowledge of aviation when the
program began, and a tremendous burden had fallen upon a relatively
small division of the Signal Corps. It recommended that military aviation
be immediately removed from the Signal Corps and that aviation eventually
become a separate department.182
During its own investigation, the Senate Committee on Military Affairs
questioned Deeds and found his answers to be satisfactory. Its final
report, however, labeled the aircraft program a "substantial failure."183
Amid the controversy, Squier did receive some support. Charles D.
Walcott, secretary of the Smithsonian and a member of the NACA, wrote
to the president on 15 April urging him to withdraw only aircraft production
from the Signal Corps' control.184
The public and press, however, feeling betrayed by the promises
of a vast aerial fleet, came down hard on the chief signal officer.
The New York Times was especially critical of Squier, judging him a "lamentable failure."185
On 24 April 1918 Secretary Baker initiated the actions that led to
the Signal Corps' loss of its aviation duties. On that date he created
two new entities within the Office of the Chief Signal Officer: the
Division of Aircraft Production and the Division of Military Aeronautics.
The latter had charge of the operation and maintenance of aircraft and
the training of personnel. John D. Ryan, former president of the Anaconda
Copper Company, became head of the Division of Aircraft Production,
while Brig. Gen. William L. Kenly became the director of the Division
of Military Aeronautics. Kenly had served as chief of the Air Service,
AEF, from August to November 1917.186
Chief Signal Officer Squier would henceforth devote his full attention
to the Signal Corps proper.187
The final separation came on 20 May 1918 when the president issued
an executive order completely detaching aviation duties from the Signal
Corps and placing them under the direct control of the secretary of
war. The Division of Military Aeronautics and the Bureau of Aircraft
Production thereupon became independent agencies within the War Department.
The Signal Corps continued to retain, however, responsibility for airborne
radio.188
But the scrutiny of the air service had not yet ended. Beginning in
May 1918 the Justice Department, at President Wilson's behest, launched
a thorough inquiry into the aeronautical program. Charles Evans Hughes,
former presidential candidate and future secretary of state and chief
justice of the Supreme Court, headed this probe.189
After five months of work, in which almost three hundred witnesses testified,
the attorney general turned over Hughes' findings to the president.
Aviation's problems, the report concluded, stemmed largely from disorganization
and incompetence rather than rampant corruption. Hughes had found evidence
of wrongdoing, however, on the part of Edward A. Deeds, against whom
Borglum had leveled serious charges.190
While Hughes cleared Deeds of Borglum's more sensational accusations
of major corruption and pro-Germanism, he found that Deeds had used
his position within the Signal Corps to benefit the Dayton-Wright Company.
Hughes also held him responsible for grossly misleading the public in
regard to the progress of the aircraft production program. His report
therefore recommended that Deeds be court-martialed, since he still
held his military commission.191
As for the chief signal officer, the investigation had found no "imputation
of any kind upon Gen. Squier's loyalty or integrity."192
With the imminent end of the war, however, the public outcry over aviation
abated, and an Army board of review subsequently exonerated Deeds of
any wrongdoing.193
As in any dispute, it is easy to cast blame, and Squier received
his share. Grover Loening, who became an aircraft manufacturer after
leaving the Army's employ in 1915, accused the chief signal officer
of being a dupe of the automobile manufacturers.194
Robert A. Millikan, who had directed the Signal Corps' Science and Research
Division, described Squier as a "strange character" who "considered
himself a scientist." Millikan further referred to Squier as "in
no sense an organizer nor a man of balanced judgment." While Millikan
credited Squier with "a willingness to assume responsibility and go ahead," he nonetheless
disparaged his "quick, impulsive decisions."195
Deeds, on the other hand, who had also worked closely with the chief
signal officer, thought highly of his abilities.196
Whatever his strengths or weaknesses, Squier cannot be held solely
responsible for the Signal Corps' loss of aviation. The separation
of this function from the Corps had been impending for some time and
was probably inevitable. The pilots had always chafed under the control
of non-flyers. Aviation was fast becoming an armed service in its own
right, although it would not achieve independent status until after
World War II. Despite the controversy surrounding his wartime program,
Squier's significant contributions to aviation should not be overlooked.
He had played a central role in the development of Army aviation from
its inception, having urged the Army to investigate the Wrights' invention
and drafted the Army's initial airplane specifications.197
Moreover, he had overseen the greatest expansion of the aerial arm to
date while concurrently running the Signal Corps' ground operations.
That one man would have difficulty managing all these activities should
not be surprising.
Less than ten years had passed from the time the Army purchased its
first airplane until the United States entered World War I and, on
balance, the Signal Corps' Aviation Section had achieved a great deal
by May 1918. Despite shortcomings and failures, which were not restricted
to the Signal Corps' operations alone, the Corps had laid the foundation
for the air program that the Army followed for the duration of the
war. From a one man/one plane air force in 1907, the Army's Air Service
had grown by November 1918 to nearly two hundred thousand officers,
men, and civilian workers. During the course of the war the Army had
received nearly seventeen thousand planes from both domestic and foreign
manufacturers.198
With the removal of the aviation function, the Signal Corps also lost
some prominent names from its rolls, among them Mitchell, Foulois, and
Arnold. While passing from the pages of Signal Corps history, these
men continued their notable careers with the Army's Air Service.199
The Signal Corps Comes of Age
The aviation story constituted yet another episode in the evolution
of the Signal Corps' mission as changes in technology constantly redefined
the nature of military communications. Once before the Signal Corps
had experienced the wrenching away of a major function, weather reporting,
only to see military meteorology achieve new importance under its auspices
during World War I.
In the case of the weather service, the cost of what was perceived
as a mostly civilian duty had grown too much for the military to justifiably
maintain. With aviation, the case was somewhat different. Clearly, aviation
performed a military mission, and its relationship to communication
was recognized and accepted. But aviation had outgrown its early beginnings
when reconnaissance was seen as its only military purpose. Now its combat
value was beginning to overshadow its other roles. Although Chief Signal
Officers Scriven and Squier had recognized
that aviation would eventually strike out on its own, they had not
been ready to let it go. As with the weather service, it took the touch
of scandal to precipitate events. But the Signal Corps' child, aviation,
had grown and matured much faster than its parent had anticipated. Like
any offspring, it was rebellious and agitated for independence, not
only from the Signal Corps but in the postwar period from the Army as
a whole.
Aviation aside, the Signal Corps as a branch was negotiating an institutional
rite of passage of its own. During the war it had multiplied its strength
by a factor of nearly thirty-five. Comprising just 55 officers and 1,570
men when Congress declared war, the Corps had grown to 2,712 officers
and 53,277 men when the war ended. These men were organized into 56
field signal battalions, 33 telegraph battalions, 12 depot battalions,
6 training battalions, and 40 service companies.200
Besides the huge increase in size, the Signal Corps that emerged from
World War I differed significantly in other ways from the organization
that had entered the conflict. The Corps had become a technical leader
with its own laboratories: It could no longer confine its scientific
work to the basement of the Signal Office in Washington. Along with
the unprecedented scale of Signal Corps operations came closer ties
with the nation's industrial leaders. While the Corps gained much in
strength and efficiency, it also lost something: the force of personality.
Figures such as Myer, Greely, and Squier would no longer loom as prominently
over and direct so closely the workings of what had become a complex
bureaucracy. Although powerful and important individuals would continue
to appear in subsequent chapters of the Signal Corps' history, the branch
no longer functioned as the sole province of one man: the chief signal
officer. In a sense, the Signal Corps had lost its innocence; as an
organization, it had reached maturity.201

-distance communications, soon caused the German
Army to reinstate wire telegraphy as part of its signaling system.95
Although radio held great promise for military communications, the
instruments available during World War I proved unsuitable for extensive
frontline use. The prewar radios used by the Signal Corps had been relatively
high-powered sets designed for a large operating area; they were not
meant to be used in the restricted conditions of trench warfare where
their inability to be finely tuned caused them to interfere with the
sets used by the Allies. Moreover, the spark-gap equipment weighed too
much (up to 500 pounds) to be easily moved and often broke down. With
the assistance of European radio experts, the Signal Corps developed
its own models and had approximately twenty-five different types in
production when the war ended. In the meantime, American forces used
French radios. Despite some improvements, particularly in the production
of vacuum tubes, "radio carried little of the war's communications
load," a fact that had a direct impact on the battlefield.96
The high combat casualty rates of World War I can partly be attributed
to the lack of a reliable wireless communications system. Once soldiers
went "over the top," they found themselves isolated. During
deafening artillery barrages a commander could not control his men with
his voice, and vision became limited amid the fog of battle. In order
to maintain contact, troops tended to move in groups that made them
easy targets for enemy machine gunners. Although wire lines were portable,
they could not last long under constant and withering artillery bombardment
that chewed them to bits; what the shellfire spared often fell victim
to the treads of tanks or other vehicles. With their communications
cut off, attackers found it difficult if not impossible to call for
reinforcements or artillery support.
The situation did not improve significantly under defensive conditions.
Shelling continued to destroy wire lines, and standard radio antennas
proved a
popular enemy target.
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | yes_statement | the "brontosaurus" and the apatosaurus were the same "dinosaur".. the "brontosaurus" and the apatosaurus are interchangeable names for the same "dinosaur". | https://thecontentauthority.com/blog/brontosaurus-vs-apatosaurus | Brontosaurus vs Apatosaurus: Deciding Between Similar Terms

Brontosaurus vs Apatosaurus: Deciding Between Similar Terms
When it comes to the world of dinosaurs, there are few debates as contentious as the one between the brontosaurus and the apatosaurus. These two massive creatures have been pitted against each other for years, with enthusiasts on both sides arguing over which one is the true king of the sauropods.
So, which one is the proper name? The answer is a bit more complicated than you might think. Scientists originally described Brontosaurus as a genus separate from Apatosaurus, but in 1903 paleontologists concluded that the two were the same animal, and the earlier name Apatosaurus took priority. Despite this, the name “brontosaurus” has stuck around in popular culture and is still used to refer to these massive creatures.
So, what do these names actually mean? “Brontosaurus” translates to “thunder lizard,” while “apatosaurus” means “deceptive lizard.” Both names are fitting for these massive creatures, which were known for their size and strength.
Now that we’ve cleared up the confusion around these two names, it’s time to dive deeper into the world of the brontosaurus/apatosaurus and explore what made these creatures so fascinating.
Define Brontosaurus
Brontosaurus is a genus of large, herbivorous sauropod dinosaurs that lived during the Late Jurassic period, approximately 155 to 140 million years ago. The name “Brontosaurus” means “thunder lizard” in Greek and was given to the dinosaur due to its massive size and impressive presence.
The Brontosaurus was characterized by its long neck, small head, and massive body. It had a long, whip-like tail that it could use to defend itself against predators. Brontosaurus was one of the largest land animals to ever exist, reaching lengths of up to 72 feet and weighing as much as 38 tons.
Define Apatosaurus
Apatosaurus is another genus of large, herbivorous sauropod dinosaurs that lived during the Late Jurassic period, approximately 155 to 140 million years ago. The name “Apatosaurus” means “deceptive lizard” in Greek and was given to the dinosaur due to its confusing taxonomy.
The Apatosaurus was similar in appearance to the Brontosaurus, with a long neck, small head, and massive body. However, it had a slightly different tail structure, with a thicker base and a shorter whip-like end. Apatosaurus was also one of the largest land animals to ever exist, reaching lengths of up to 75 feet and weighing as much as 38 tons.
Brontosaurus vs Apatosaurus Comparison Table

             | Brontosaurus   | Apatosaurus
Period       | Late Jurassic  | Late Jurassic
Name Meaning | Thunder Lizard | Deceptive Lizard
Length       | Up to 72 feet  | Up to 75 feet
Weight       | Up to 38 tons  | Up to 38 tons
How To Properly Use The Words In A Sentence
When it comes to using the words “brontosaurus” and “apatosaurus” in a sentence, it’s important to understand the subtle differences between these two dinosaur species. Here’s a guide on how to use each word properly:
How To Use Brontosaurus In A Sentence
Brontosaurus is a genus of sauropod dinosaur that lived during the Late Jurassic period. Here are some examples of how to use “brontosaurus” in a sentence:
The brontosaurus was one of the largest land animals to ever exist.
Scientists recently discovered a new species of brontosaurus in South America.
Children love learning about the brontosaurus because of its long neck and tail.
When using “brontosaurus” in a sentence, it’s important to note that this term has been the subject of some controversy in the scientific community. In the late 19th century, Brontosaurus and Apatosaurus were described as two distinct genera. However, further research showed that they belonged to the same genus, and because Apatosaurus had been named first, it became the accepted scientific name.
How To Use Apatosaurus In A Sentence
Apatosaurus is another genus of sauropod dinosaur that lived during the Late Jurassic period. Here are some examples of how to use “apatosaurus” in a sentence:
The dinosaur popularly known as the brontosaurus is scientifically classified as Apatosaurus, the name that was published first.
Apatosaurus had a long neck and tail, which it used to reach vegetation high up in trees.
Despite its massive size, the apatosaurus was a herbivore and posed no threat to humans.
When using “apatosaurus” in a sentence, it’s important to note that this term is now widely accepted as the correct name for the species formerly known as the brontosaurus. Using “brontosaurus” instead of “apatosaurus” may lead to confusion or misunderstandings.
More Examples Of Brontosaurus & Apatosaurus Used In Sentences
As we continue to explore the differences between Brontosaurus and Apatosaurus, it’s helpful to see how these names are used in everyday language. Let’s take a look at some examples of using Brontosaurus and Apatosaurus in a sentence:
Examples Of Using Brontosaurus In A Sentence:
The Brontosaurus was one of the largest dinosaurs to ever roam the earth.
My son loves playing with his Brontosaurus toy.
Scientists recently discovered a new species of Brontosaurus in South America.
The Brontosaurus had a long, whip-like tail that it could use to defend itself.
Many people do not realize that the Brontosaurus and Apatosaurus are names for the same dinosaur.
The Brontosaurus had a small head in proportion to its massive body.
Some scientists believe that the Brontosaurus was actually a type of Apatosaurus.
My favorite dinosaur is the Brontosaurus because of its impressive size.
The Brontosaurus lived during the Jurassic period, approximately 150 million years ago.
The Brontosaurus was an herbivore, meaning it only ate plants.
Examples Of Using Apatosaurus In A Sentence:
The Apatosaurus was previously known as the Brontosaurus.
The Apatosaurus had a long neck that it used to reach leaves high up in trees.
The Apatosaurus had a small brain in proportion to its massive body.
The Apatosaurus was one of the largest animals to ever walk the earth.
Scientists believe that the Apatosaurus may have been able to produce low-frequency sounds to communicate with other dinosaurs.
The Apatosaurus lived during the late Jurassic period, approximately 150 million years ago.
The Apatosaurus was a herbivore, meaning it only ate plants.
The Apatosaurus had a long, whip-like tail that it could use to defend itself.
Some scientists believe that the Apatosaurus may have had a hump on its back.
The Apatosaurus was first discovered in the late 1800s by paleontologist Othniel Charles Marsh.
Common Mistakes To Avoid
When it comes to the world of dinosaurs, there are few topics that generate as much confusion as the difference between brontosaurus and apatosaurus. While these two massive creatures are often used interchangeably in popular culture, the truth is that they are distinct species with their own unique features and characteristics.
Highlighting Common Mistakes
One of the most common mistakes people make when using brontosaurus and apatosaurus interchangeably is assuming that they are the same animal. While it’s true that these creatures share many similarities, including their long necks and massive size, they are actually two distinct species with their own unique traits.
Another common mistake is assuming that brontosaurus is the correct name for these creatures. In fact, the term “brontosaurus” was actually based on a misidentification of apatosaurus fossils in the late 1800s. While the name “brontosaurus” was used for many years, it was eventually recognized as incorrect and replaced with the correct name, apatosaurus.
Tips For Avoiding Mistakes
If you want to avoid making these common mistakes when discussing brontosaurus and apatosaurus, there are a few tips to keep in mind. It’s important to do your research and make sure you understand the differences between these two species. This includes studying their physical characteristics, behaviors, and habitats.
Additionally, it’s important to use the correct terminology when referring to these creatures. While “brontosaurus” may be a more familiar term, it’s important to recognize that it’s incorrect and use the correct name, apatosaurus, instead.
Finally, if you’re ever unsure about the differences between these two species, don’t be afraid to ask an expert. Whether you’re talking to a paleontologist, a museum curator, or a fellow dinosaur enthusiast, there are plenty of resources available to help you learn more about these fascinating creatures.
Context Matters
When it comes to discussing the differences between the brontosaurus and the apatosaurus, context is key. Depending on the specific context in which these giant dinosaurs are being discussed, the choice of which name to use can vary.
Examples Of Different Contexts
One context in which the choice of name matters is in scientific research. In this context, it is important to use the correct scientific name to ensure accuracy and avoid confusion. In this case, the correct name to use is apatosaurus: it was published in 1877, two years before brontosaurus, and under the rules of zoological nomenclature the earlier name takes priority.
Another context in which the choice of name matters is in popular culture. In movies, TV shows, and other forms of entertainment, the name brontosaurus is often used to refer to this dinosaur. This is likely due to the fact that the name brontosaurus is more well-known and has been used in popular culture for much longer than the name apatosaurus.
Additionally, in educational contexts, the choice of name may depend on the age group of the audience. Young children may be more familiar with the name brontosaurus, while older students and adults may be more familiar with the name apatosaurus.
Overall, the choice between brontosaurus and apatosaurus depends on the specific context in which they are being used. While apatosaurus is the scientifically accurate name, brontosaurus is more commonly used in popular culture and may be more familiar to certain audiences. It is important to consider the context and audience when deciding which name to use.
Exceptions To The Rules
While the rules for using brontosaurus and apatosaurus are generally straightforward, there are some exceptions where these rules may not apply. Here are a few examples:
1. Paleontological Discoveries
As paleontologists continue to make new discoveries and advancements in their field, there may be instances where the classification of a dinosaur changes. In these cases, the rules for using brontosaurus and apatosaurus may need to be reassessed.
For example, in 2015, a team of scientists published a detailed analysis of sauropod skeletons and concluded that Brontosaurus was distinct enough from Apatosaurus to be treated as a separate genus after all. This finding meant that the long-standing rule of folding brontosaurus into apatosaurus had to be reconsidered.
2. Regional Differences
While the rules for using brontosaurus and apatosaurus are generally accepted worldwide, there may be regional differences in how these terms are used. For example, in some countries, the term “brontosaurus” may be more commonly used to refer to all species of sauropod dinosaurs, regardless of their scientific classification.
Similarly, in some regions, the term “apatosaurus” may be used to refer to a specific species of dinosaur, while in other regions, it may be used more broadly to refer to all members of the apatosaurinae subfamily.
3. Linguistic Context
In certain linguistic contexts, the rules for using brontosaurus and apatosaurus may not apply. For example, in casual conversation or popular culture, these terms may be used interchangeably or incorrectly without any significant consequences.
However, in scientific or academic contexts, it is important to use these terms correctly and in accordance with their scientific classification. This ensures that there is no confusion or ambiguity when discussing these fascinating creatures.
Practice Exercises
Now that you have a better understanding of the differences between brontosaurus and apatosaurus, it’s time to put your knowledge into practice. Here are some exercises to help you improve your understanding and use of these two dinosaur names:
Exercise 1: Fill In The Blank
Fill in the blank with either brontosaurus or apatosaurus:
The __________ was originally named by Othniel C. Marsh in 1879.
Some scientists argue that the __________ never actually existed.
The __________ had a longer neck than the __________.
Many people still refer to the __________ as the “thunder lizard.”
Answers:
brontosaurus
brontosaurus
apatosaurus, brontosaurus
brontosaurus
Exercise 2: Multiple Choice
Choose the correct answer:
Which dinosaur was discovered first?
a) brontosaurus
b) apatosaurus
Which dinosaur had a longer tail?
a) brontosaurus
b) apatosaurus
Which dinosaur was taller?
a) brontosaurus
b) apatosaurus
Answers:
b) apatosaurus
a) brontosaurus
b) apatosaurus
Exercise 3: Sentence Completion
Complete the sentence with the appropriate dinosaur name:
The __________ was a herbivore that lived during the Late Jurassic period.
The __________ was originally named “deceptive lizard” because of its unusual vertebrae.
Despite being extinct for millions of years, the __________ still captures the imagination of people around the world.
Answers:
apatosaurus
apatosaurus
brontosaurus
By practicing with these exercises, you can become more confident in your understanding and use of these two dinosaur names. Keep in mind that while they may seem interchangeable, there are important differences between brontosaurus and apatosaurus that should be acknowledged.
Conclusion
After exploring the differences between brontosaurus and apatosaurus, it is clear that there is more to these dinosaurs than meets the eye. While they may have been similar in appearance, their distinct characteristics and histories set them apart.
Key Takeaways
Brontosaurus was a misidentified dinosaur that was later recognized as a species of apatosaurus.
Apatosaurus had a longer tail and a different bone structure than brontosaurus.
Language and grammar usage can have a significant impact on the perception of information.
As writers and communicators, it is important to pay attention to the language we use and ensure that we are conveying accurate information. By understanding the differences between similar terms, we can better inform and educate our audiences.
Shawn Manaher is the founder and CEO of The Content Authority. He’s one part content manager, one part writing ninja organizer, and two parts leader of top content creators. You don’t even want to know what he calls pancakes.
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | yes_statement | the "brontosaurus" and the apatosaurus were the same "dinosaur".. the "brontosaurus" and the apatosaurus are interchangeable names for the same "dinosaur". | https://www.dictionary.com/browse/brontosauri | Brontosaurus Definition & Meaning | Dictionary.com

Scientific definitions for brontosaurus
word history
Take a little deception, add a little excitement, stir them with a century-long mistake, and you have the mystery of the brontosaurus. Specifically, you have the mystery of its name. For 100 years this 70-foot-long, 30-ton vegetarian giant had two names. This case of double identity began in 1877, when bones of a large dinosaur were discovered. The creature was dubbed apatosaurus, a name that meant deceptive lizard or unreal lizard. Two years later, bones of a larger dinosaur were found, and in all the excitement, scientists named it brontosaurus or thunder lizard. This name stuck until scientists decided it was all a mistake: the two sets of bones actually belonged to the same type of dinosaur. Since it is a rule in taxonomy that the first name given to a newly discovered organism is the one that must be used, scientists have had to use the term apatosaurus. But thunder lizard had found a lot of popular appeal, and many people still prefer to call the beast brontosaurus.
Cultural definitions for Brontosaurus
A large herbivorous (see herbivore) dinosaur, perhaps the most familiar of the dinosaurs. The scientific name has recently been changed to Apatosaurus, but Brontosaurus is still used popularly. The word is from the Greek, meaning “thunder lizard.”
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | yes_statement | the "brontosaurus" and the apatosaurus were the same "dinosaur".. the "brontosaurus" and the apatosaurus are interchangeable names for the same "dinosaur". | https://www.nps.gov/articles/000/tourism-and-dinosaurs.htm | Tourism and Dinosaurs (U.S. National Park Service)

Abstract
The 1964 New York World’s Fair wowed visitors with dazzling displays of technology and industry promising a bright future. Yet one of its most popular attractions was Sinclair Oil’s Dinoland, a life-size display of ancient reptiles. Inspired by the company’s well-known green brontosaurus logo, men, women, and children by the thousands visited Dinoland.
A public fascination with dinosaurs has spurred countless roadside attractions featuring evidence of the creatures throughout the United States. The discovery of fossilized tracks on private land near Glen Rose, Texas, in the early 20th century captivated local imaginations. During the Good Roads Movement of the 1920s, intrepid out-of-towners hunted for the tracks in riverbeds and overgrown foliage. Scientific study came with the New Deal, when a statewide Work Projects Administration paleontology survey sponsored excavation for museum exhibits. A desire for tourist dollars helped ensure the dinosaur tracks received protection and remained publicly accessible, ultimately resulting in the creation of Dinosaur Valley State Park in 1968.
Shortly thereafter, a merger between Sinclair Oil and the Atlantic Richfield Company retired the former’s famous trademark, an act that presented Glen Rose with a unique opportunity. Local officials, recalling Dinoland’s post-World’s Fair national tour, wished to acquire the dinosaurs for their new state park. Texas Governor Preston Smith helped secure a Tyrannosaurus rex and Brontosaurus; the pair arrived via a 40-foot-long trailer truck in July 1970.
It is rather fitting that two dinosaurs purchased with oil money welcome motoring tourists to Dinosaur Valley State Park. Yet Rex and Bronto, Texas State Parks’ most anomalous “residents,” pose significant preservation and interpretive challenges: the maintenance of life-size ancient beasts now historic in their own right, the juxtaposition of scientific inquiry along roadside kitsch, and the continuing debate between evolution and creationism.
Bronto and Rex welcome visitors. Now over 50 years old, the sculptures pose significant preservation and interpretive challenges: the maintenance of life-size figures, the juxtaposition of roadside kitsch alongside scientific inquiry, and the continuing debate between evolution and creationism.
Jennifer L. Carpenter
“Drive with Care and Buy Sinclair”
Visitors parking their cars at the 1964-1965 New York World’s Fair could likely recite the Sinclair Oil slogan. The company, founded in 1916 by Harry F. Sinclair following a successful investment in Oklahoma’s early oil boom, owned thousands of gas stations across the United States, each adorned with a green brontosaurus logo. Sinclair trademarked its reptilian symbol in 1932 after an advertising campaign featuring multiple dinosaur species revealed the brontosaurus’s popularity among customers. “Dino” quickly became the face of the company’s marketing efforts.[1] If, for some reason, a World’s Fair-goer had not yet “Ma[d]e a Date With Dino” and fueled up the Sinclair way, he or she would be able to do so upon exit at one of two special service stations erected in the Fair’s parking lot.[2]
Sinclair Oil's brontosaurus brand appealed to current and future drivers, as demonstrated by this ad selling both gasoline for driving adults and dinosaur soap for children.
Corpus-Christi Times Caller, June 8, 1965
What these World’s Fair visitors may not have known, however, was exactly “Why Sinclair is interested in Dinosaurs.” The company’s contribution to the international extravaganza was Dinoland, a 34,418-square-foot “realistic and authentic Re-creation of Life-size Dinosaurs and the prehistoric world in which they lived.”[3] According to Sinclair, dinosaurs “dramatize[d] the age and quality of the crude oils from which Sinclair Petroleum Products are made—crudes which were mellowing in the earth millions of years ago when the Dinosaurs lived.”[4] In other words, a dinosaur mascot was a natural choice for a fossil fuel business, as it reminded customers of the precious resource powering their cars. A less altruistic reason was, perhaps, the logo’s appeal to children, who would beg their parents to stop at the gas station with the giant-but-gentle mascot.
Today, at Dinosaur Valley State Park in Glen Rose, Texas, excited children implore their parents to let them visit the park’s own reptilian residents, Bronto and Rex, before continuing on to view fossilized tracks in the riverbed. What these 21st-century families may not realize is that the two dinosaurs greeting them in Glen Rose welcomed New York World’s Fair visitors 50-plus years earlier. How did two oversized reptiles, artifacts from one of the United States’ most significant cultural events of the mid-20th century, end up in one of Texas’s smallest counties?
New York World’s Fair, 1964-1965
Encompassing 646 acres of Flushing-Meadows Corona Park in the borough of Queens, the 1964-1965 New York World’s Fair introduced visitors to new world cultures, foods (most memorably, the Belgian waffle), and high-tech innovations in the spirit of “Peace Through Understanding.” The same site had hosted “The World of Tomorrow,” the 1939-1940 World’s Fair, during the difficult days of the Great Depression. But the city’s second international spectacular would be unlike its predecessor, and indeed, unlike all previous global expositions. Headed by New York’s “Master Builder” Robert Moses, the event lacked an official Bureau of International Exhibitions (BIE) endorsement, which led several leading nations to decline invitations to participate. Instead, Moses turned to American corporations to fill exhibit pavilions and foot the bills. They responded in force.
Dinoland, Sinclair Oil's exhibit at the 1964-1965 New York World's Fair, took an estimated 10 million visitors back in time during the Space Age
Steve Fasnacht, flickr.com/electrospark
Companies such as Ford, Kodak, and RCA spent thousands of dollars on exhibits, rides, and activities to showcase their latest, cutting-edge products. The Space Age was in full swing, and Fair-goers waited in long lines to take in dazzling displays of technology and industry promising a bright future. General Motors’ “Futurama” exhibit forecast gleaming modern metropolises (just as their World of Tomorrow exhibit had 25 years earlier), Bell Telephone introduced a video conferencing prototype “Picturephone,” IBM showcased “thinking” computers that aimed to work like a human brain—and there were jetpacks! DuPont and Owens-Corning’s newest synthetic materials, plastic and fiberglass, allowed more flexibility in architectural and product design. Innovation merged with nostalgia in attractions produced by Walt Disney. Early versions of “It’s a Small World,” sponsored by Pepsi-Cola, and a lifelike, speaking Abraham Lincoln for the State of Illinois pavilion helped Disney perfect his “animatronic” technology.[5]
Juxtaposed against the science of the future was a science about the past: paleontology. Situated in the Fair’s “Transportation Area,” sandwiched between the Lowenbrau Beer Garden and the U.S. Rubber/Royal Tires Ferris Wheel,[6] was one of the fair’s most popular attractions: Sinclair Oil’s Dinoland. Inspired by the company’s recognizable brontosaurus logo, this life-size display of ancient reptiles in antediluvian settings captured the imaginations of kids and adults alike. Fair-goers came face-to-face with ferocious beasts of the past! They learned the difference between carnivorous and herbivorous dinosaurs, the location in the United States where these prehistoric reptiles once roamed, and the meaning behind their Latin names. The exhibit also allowed Sinclair to tout its geological expertise and multi-million dollar research facilities, which, the company hoped, would make visitors of driving age more inclined to purchase Sinclair Oil products for their cars.
The 1964-1965 New York World’s Fair was not the first to display life-size dinosaurs. London’s 1852 Crystal Palace Exposition featured a reconstructed dinosaur model. Sinclair’s reptiles debuted at Chicago’s World’s Fair of 1933-1934, travelled to Dallas for the 1936 Texas Centennial, and were a popular attraction at the 1939-1940 New York World’s Fair. A new, updated set was desired for the 1964 event, better reflective of the latest in paleontological research, with some models incorporating moving elements for added believability.[7] Crafting one life-size dinosaur would be a challenging commission for any sculptor, but Sinclair ordered nine for its exhibit—a Tyrannosaurus rex, Brontosaurus, Triceratops, Stegosaurus, Corythosaurus, Ankylosaurus, Struthiomimus, Trachodon, and Ornitholestes. The company turned to a seasoned sculptor to ensure its primordial creatures would impress the general public and professional scientists.
Louis Paul Jonas, Master Wildlife Sculptor
Dinoland featured the work of Louis Paul Jonas, a Hungarian-born taxidermist and sculptor who was known for his meticulous attention to scientifically-correct detail. Jonas immigrated to the United States in 1906 to work with his brothers in a Colorado taxidermy shop. At age 18, while on a trip to New York City’s American Museum of Natural History, he met famed taxidermist, sculptor, biologist, and conservationist Carl Akeley, and became his assistant.[8] Jonas later launched his own business, mounting trophies for big game hunters, but he continued to use his skills for educational purposes. From an abandoned-railroad-station-turned-studio in Mahopac, New York, Jonas built large-scale habitat groups and wildlife displays for museums, as well as small animal figurines. It mattered little whether Jonas had actually seen the animal in the wild. A 1942 magazine article marveled at his ability to learn: “Jonas had never seen a panda, got it up entirely from books, bones, and skin.”[9] Jonas similarly tackled large subjects, such as elephants, with gusto.
Sculptor Louis Paul Jonas poses with a to-scale Brontosaurus model in his New York studio
Berkshire Eagle, August 18, 1962
By the time of the World’s Fair, Jonas’s decades of experience made him the top choice for the Sinclair Oil assignment. True to form, Jonas wanted to get it right, working with renowned paleontologists Barnum Brown, credited with discovering the Tyrannosaurus rex, and John Ostrom, curator of vertebrate paleontology at Yale University’s Peabody Museum. “No task is too large for the Jonas crew to handle,” proclaimed The Rotarian magazine in 1960,[10] but it took three years and 15+ assistants to complete this prehistoric commission.[11]
Jonas's finished dinosaurs arrive in the New York City harbor ready for installation at the Fair
Sinclair Oil website, www.sinclairoil.com/dino-history
Working out of a bigger studio in Claverack, N.Y., the Jonas team sketched each dinosaur before crafting 1/10th scale clay models. Transparencies of the models were then projected to actual size on a wall. A plywood frame came next, which was wrapped with wire mesh and filled out with burlap and plaster. This basic form was covered with modeling clay to give the dinosaurs their skin and face detail. The clay was cut into large pieces after it dried, and a plaster mold created for each section. A polyester and fiberglass resin was sprayed and painted into the molds to create “shell” pieces, which, once dry, were bolted onto a lightweight steel frame. Seams were smoothed and sanded, followed by a thorough paint job. Jonas’s dinosaurs were more lifelike than their concrete predecessors, as new materials such as fiberglass allowed for subtle touches like skin texture. Still, the sheer size of the sculpted beasts was amazing: Jonas estimated that his Brontosaurus weighed 5,000 pounds.[12]
A journey through Dinoland promised "A reenactment of life on earth as it was some 60-million to 180-million years ago," with reproduction dinosaurs of all sizes, including the Brontosaurus, "one of the largest land creatures that ever lived.”
The Exciting World of Dinosaurs: Sinclair Dinoland, New York World’s Fair, 1964-65
“Everything is guaranteed authentic,” declared an article in The Daily News-Telegram of Sulphur Springs, Texas, “except for one detail—color. Both the Ford [13] and Sinclair dinosaurs are colored according to what scientists believe they actually were, but a Jonas spokesman admitted it’s really guesswork.”[14] Questions over accurate coloring did not dampen enthusiasm for the project. Once Jonas’s team completed its work, 30,000 locals showed up to give the larger-than-life sculptures a proper send-off. From Claverack, the dinosaurs floated 125 miles down the Hudson River on barges, arriving in the New York City harbor to equal fanfare in October 1963.[15] The Dinoland sculptures ranged in size, from the 6-foot-long Ornitholestes to the 70-foot-long Brontosaurus. Each was carefully installed in a Mesozoic Era landscape featuring lush greenery, water features, and a raging volcano. Dinoland merged education and entertainment, using Sinclair’s interest in dinosaurs to demonstrate the company’s commitment to scientific research and to create brand loyalty, both motivated in large part by profit. Architecture and culture critics openly questioned the heavy influence of corporate money at the New York World’s Fair, but to the event’s 50-plus million visitors, the memories created outweighed the commercialism.[16]
At Dinoland, men, women, and children transported themselves back in time, marveled at the scale of a pre-human world, and perhaps, even took a small plastic “Bronto” of their own home as a souvenir.[17] Those who visited the fair as children can still recall their excitement watching their mini Bronto take shape:
My most visceral memory, and it is very strong, is that of the Sinclair dinosaur machine, where you put coins in—I think it was 50 cents—and the green goop came down the pipes and was pressed between two halves of a dinosaur mold, through the glass, right in front of you. Then it came out the bottom like in any vending machine, still slightly warm. The green plastic smell was fabulous. I kept the dinosaur for years, mostly hoping to capture that smell, a cross between “new car” and gasoline.
Anne Yeager, 57, Bronxville, NY [18]
Could there have been any better souvenir from a petroleum company?
Fossilized Footprints: Discovery at Glen Rose
In its “Exciting World of Dinosaurs” Dinoland booklet, Sinclair asserted its scientific and industrial aptitude: “Even before dinosaurs lived, petroleum was forming in the earth during the Paleozoic Era some 230-million or more years ago. Today Sinclair geologists never cease to explore the world in search of long-hidden and precious petroleum crudes. Some Sinclair oil wells are drilled 3 miles deep, or more, to tap these ancient reserves.”[19] Only the latest technology was deployed upon extraction: “Today, Sinclair uses ultra-modern refining techniques to refine and transform these age-old crudes into top-quality Sinclair gasolines, Motor Oils, and other Petroleum Products…”[20]
The company also funded scientific research in the field of paleontology, further reinforcing the link between Sinclair Oil and dinosaurs. Excavations in Wyoming and Colorado in 1934 and 1937 received Sinclair money,[21] but it was one such Sinclair-funded study in the small agricultural community of Glen Rose, Texas, that helped Roland T. Bird capture the discovery of a lifetime. A field explorer for the American Museum of Natural History and assistant to Barnum Brown, Bird travelled the country looking for fossils and bones for the institution. Acting on a tip he received from a clerk selling dinosaur tracks at a store in New Mexico, Bird pointed his car east and arrived in Glen Rose in November 1938. Much to his delight, right in front of the courthouse was a perfectly preserved theropod track:
…my eyes caught sight of something that made me want to shout for joy. There, inserted in a bit of masonry not far from the door, was a large, three-toed dinosaur footprint. Its surface had been turned away from me, and I’d thought for an instant it was the usual fossilised [sic] log or stump one sometimes finds exhibited in places where fossils abound. But as I swung the Buick in to the curb it presented in all its outlines a faithful picture of such a track. It was a beauty, and there was no doubt that it was genuine. It was all of twenty inches of footprint perfection, made by a three-toed carnivore in mud which had faithfully preserved every minute detail. The satisfaction of seeing it was worth my extra miles; it clarified the worst half of an embarrassing problem, and gave promise of other things. A slab of such prints alone would be a fine addition to any museum collection. [22]
Bird inquired as to the track’s origin and learned such things were “long taken for granted in the community.” Glen Rose’s tracks date to the Cretaceous Period, 113 million years ago, when roaming dinosaurs left footprints in calcium-rich mud. The mud hardened and preserved the shape of their feet and claws. Layers of dirt and sediment covered up the tracks, which were slowly revealed by area rivers through millennia of erosion. The courthouse track had been extracted in 1933, but the area’s footprints had been known far longer: local boy George Adams discovered tracks in a Paluxy River tributary in 1909, and in 1917, Ellis Shuler of Southern Methodist University published “Dinosaur Tracks in the Glen Rose Limestone near Glen Rose, Texas,” in The American Journal of Science. Track hunting was a popular recreational activity, but it required some effort. Texan newlyweds Joe and Laurie Sanders road-tripped from New Braunfels to Glen Rose on their 1929 honeymoon, climbing through barbed wire and wading through tall weeds to snap photos of the gigantic fossilized footprints.[23]
Good Roads for Glen Rose
Glen Rose is the county seat of Somervell County, forty miles southwest of Fort Worth; at 188 square miles, Somervell is one of the state’s smallest counties. The area counts two rivers, the Brazos and Paluxy, and three streams among its waterways.[24] Primarily an agricultural community in the late 19th and early 20th centuries, the town became a health resort in the 1920s thanks to the discovery of artesian mineral wells. Hoping to capitalize on a tourism boom, and in sync with the national Good Roads Movement, the county began to improve its roads and water crossings. Highway 68 received a concrete and steel bridge in 1923. By the time Roland T. Bird arrived, area thoroughfares were “above average” and easily navigable in his trusty Buick.[25]
Under the direction of the American Museum of Natural History's Roland T. Bird, a WPA-sponsored work crew excavated segments of fossilized dinosaur tracks in the Paluxy riverbed during the spring of 1940.
After meeting local farmer and landowner Jim Ryals, Bird asked to see track locations on his property. Ryals did not share Bird’s enthusiasm for Glen Rose’s dinosaurs. Previously, the farmer had cut a few tracks out of the dried riverbed, hoping to sell them for profit, but the amount of work involved did not bear the expected returns. Ryals described to Bird the types of tracks he had over the years, the number and location of which varied depending on what the river decided to expose. Most had been three-toed theropod tracks, like the specimen Bird encountered at the county courthouse. However, Ryals claimed to have seen tracks of a different shape, now buried under deep silt and gravel. As Bird began to investigate, he came across two trails of three-toed tracks, which he hoped to excavate for his museum’s new Jurassic Hall display. But, after digging into a nearby “pothole,” Bird uncovered something even he could not believe: sauropod tracks. These tracks were larger, rounder, and had four toes instead of three. He was beside himself:
It was like uncovering a place where one of the pillars of Hercules might have stood. My emotions could not have been more stirred over a find of dinosaur eggs. It seemed like an hour, but it must have been less than a minute before my shovel grated bottom, and with a little careful sweeping out the thing was clean enough to be defined.[26]
Finding sauropod tracks was a hugely significant event. At the time, paleontologists debated whether such an animal could even move on land, hypothesizing such creatures instead lived in the ocean.[27] Local resident Charlie Moss had uncovered sauropod tracks in 1934, but, since he was not a trained paleontologist, he mistakenly believed they were elephant footprints. Bird’s experience paid off. He quickly made plaster casts of as many prints as he could, shipped them back east, and shared the news with his American Museum of Natural History colleague Barnum Brown and the local media. Word spread quickly through Glen Rose and reached major cities. The Dallas Morning News published an article on the discovery on November 30, 1938.[28]
Bird returned to Texas for additional work in the spring of 1940. This time funded partially by Texas’s State-wide Paleontological Survey, a Work Projects Administration (WPA) project, and partially by Sinclair Oil, he hired a crew of young male workers to build coffer dams and clean mud and silt from exposed tracks. The project generated a lot of interest, with many residents visiting the site to watch the action. For three months, the men exposed more and more dinosaur footprints, experiencing minor setbacks whenever the spring rains washed away days’ worth of excavation. Bird had promised tracks to a number of institutions: the University of Texas at Austin, Southern Methodist University, Brooklyn College, and the Smithsonian. The slab delivered to Bird’s own museum measured 28 feet long. Satisfied he had secured some of the best fossilized footprints he had ever seen, Bird wrapped up the project in July, though the slab he unearthed for the American Museum of Natural History was not installed for several years.[29]
At Dinosaur Valley State Park, the Paluxy River slowly eroded the landscape to expose preserved dinosaur tracks imprinted millions of years ago
Texas Parks and Wildlife Department
“What will Glen Rose do with its Dinosaur Valley?”
As big city outsiders and professional paleontologists showed up in Glen Rose in search of tracks, locals began to reassess the value of their taken-for-granted resource. Ryals had not charged Bird for access to his property, recognizing the scientific information garnered from the work benefited the public good, but other landowners capitalized on the opportunity, charging for access to view tracks on their parcels. Some sold fossils they had unearthed to supplement their incomes, at times out of necessity. Vandalism became a concern, too, as word spread about the area’s prehistoric relics. Still, the community wished to promote its dinosaur discoveries by developing track sites into tourist destinations. Such a move would help boost the local economy and reap the benefits of a post-World War II America that now had the time and money to road trip.
Bird’s excavations brought Glen Rose to the American public through magazine articles, museum displays, and paleontological collections. By 1963, travel writer Ed Syers urged the American public to come to Glen Rose, for there was nothing else like seeing the tracks in situ:
I’ve seen both Austin’s and New York’s excellent exhibits. At both, you are quite aware that is what they are—exhibits. Not so with these footsteps. You need pretend nothing. You know where you stand and where they walked. It is that knowing and looking that shakes your earth and turns your sky old. What will Glen Rose do with its Dinosaur Valley?[30]
The Somervell County Historical Society and Chamber of Commerce developed a “Dinosaur Trail” for motoring tourists. U.S. Congressman W.R. “Bob” Poage of Waco proposed a national scenic parkway. Such interventions would bring travelers, but they did little to protect the track sites themselves. In May 1966, the Whitaker family took the first step, offering the Chamber of Commerce a purchase option on their 347 acres, with the intent the land be bought by a private entity and donated to the state. The Texas Legislature created Dinosaur Valley State Park the following spring, though it came with no funding. As time on the purchase option nearly ran out, everything fell into place. The Texas Parks and Wildlife Department (TPWD) signed a contract for the land on September 27, 1968.[31]
A Dinosaur-sized Donation
By the time Dinosaur Valley State Park became a reality, the New York World’s Fair was four years in the past. Few of its structures were intended to last beyond the extravaganza, though some pavilions and attractions were reassembled in new locations. Dinoland’s exposition neighbor, the U.S. Royal Tire-shaped Ferris wheel, found a new home along a Detroit, Michigan, highway.[32] Hoping to prolong the popularity of its display (and to continue advertising its products), Sinclair took its reptiles on the road. Tyrannosaurus rex, Brontosaurus, and their pals toured the nation, setting up in mall parking lots. Families unable to visit the Fair got the chance to witness at least one of its captivating displays. An estimated thirty-five million people in 100 cities welcomed the Sinclair dinosaur exhibit in 1966, 1967, and 1968.[33]
The Sinclair dinosaur tour stopped in San Antonio and Fort Worth, Texas. At the latter event at the Seminary South shopping center in September 1966, the Glen Rose Chamber of Commerce set up an information booth and handed out “Dinosaur Hunting Licenses,” eager to increase visitation to their town and garner support for its proposed state park. Seeing the large lizards in person, Chamber representatives wished to secure them at the end of their national tour, believing they would be a valuable addition to the site. Texas Governor Preston Smith echoed this petition, in written form and on a radio program in February and March 1970, respectively. Smith’s timing proved better. The spring before, a merger between Sinclair Oil and the Atlantic Richfield Company (ARCO) forced the green brontosaurus logo into retirement. Sinclair no longer needed its life-size dinosaurs, and the Governor’s request for the popular prehistoric statues, along with several others, filled ARCO company mailboxes.
Texas Parks and Wildlife Department officials met with the Governor’s Office in May, where they learned the agency, thanks to Smith’s efforts, would indeed receive two ARCO/Sinclair dinosaurs for its still-in-development park in Glen Rose. Supposedly the oil company had promised the entire Dinoland cast to TPWD but, following the deluge of requests, could deliver only a partial set. Details came later that summer: “2 full-scale dinosaurs (one Brontosaurus and one Tyrannosaurs rex), one bird, one or two specimens of baby Brontosaurs, and one or two eggs” would arrive at Dinosaur Valley State Park on three tractor trailers. The statues had been mounted to the trailers during their touring days and would need to be cut free. TPWD would also need to supply a hydraulic crane to lift them into place.[34]
Dinosaur Valley State Park staff install Rex (L) and Bronto (R), two former residents of Sinclair Oil’s Dinoland and part of the company’s popular travelling dinosaur exhibit.
Tuesday, July 14, 1970, marked one of the more interesting days in TPWD history, for it was the day dinosaurs returned to Glen Rose. Dinosaur Valley State Park superintendent Lester Galbreath and staff, having no experience installing such large sculptures, did their best to safely welcome the creatures, but unfortunately, T. rex’s tail cracked when the sculpture’s lower half took an unexpected tumble. Brontosaurus was installed without issue. Sculptor Louis Paul Jonas flew to Glen Rose the next month to repair the tail, surely happy his most-recognized pieces would not go unseen in storage after all.[35] ARCO company representatives were pleased, too. The donation made for a good press release. Frank Sorgorka, the appointed “Father of the Sinclair Dinosaur collection,” wrote:
Our company is very happy to have made the Dinosaurs available to the people of Texas for inclusion in your very fine Dinosaur State Park which is now in the process of completion. We hope that the school children of your great State and the surrounding area will derive educational value and enjoyment out of the exhibit.[36]
The gift was educational, and financially beneficial: ARCO could write off the donation on its tax returns, and the company requested from TPWD a formal receipt totaling $94,344.[37]
Dinosaur Valley State Park opened to the public on October 2, 1970. Governor Smith arrived via helicopter to formally accept the ARCO/Sinclair dinosaur models. Roland T. Bird returned to Glen Rose for the first time since his dig, the discovery that set everything into motion thirty years earlier. He spent several hours walking along the Paluxy riverbed reminiscing. The ribbon cutting brought together a diverse group of stakeholders—state officials, oil company executives, paleontologists, local boosters—who gathered with a crowd of 500 in the shadow of the park’s two new prehistoric beasts. As for the additional cargo (one baby Bronto, one egg, and the “bird,” Archaeopteryx), the park had no place to house them, so the little Bronto went to Oakdale Park in Glen Rose. The egg appears to have been in the possession of TPWD’s Exhibit Shop before becoming part of an exhibit at the University of Wisconsin at Madison’s Geology Museum in November 1976.[38] As of this writing, Archaeopteryx’s whereabouts remain unknown. The remainder of Sinclair Oil’s Dinoland figures found new homes in parks, zoos, and other cultural institutions.[39]
During the early days of Dinosaur Valley State Park, its life-size statues, named Rex and Bronto, were the only features greeting visitors on their way to the tracks. Decidedly the most unusual residents of the Texas State Parks system, the pair tested park staff from the beginning with their special maintenance needs. After initially fixing an installation mishap, Louis Paul Jonas Studios returned to Glen Rose in 1974 to complete $10,000 worth of general repairs, including joint realignment and exterior refinishing. Two years later, T. rex’s troublesome tail once again needed work. TPWD staff used “Bondo,” a fiberglass automotive product, to repair the damage, but had to guess at the correct paint colors to use, as Jonas Studios had not shared any specifications. By 1984, the Brontosaurus tail showed significant deterioration and was missing its tip. Rex had 18 fewer teeth, and both dinos had warping seams and cracks in their skin. This more extensive work totaled $19,150.[40] More recently, Rex received a new coat of paint in 2015, but Bronto’s circa-2010 paint has faded; repainting is estimated to cost $30,000.[41]
Dino-tourism and Creationism
Rex and Bronto are a big draw for the area’s dinosaur-based tourism industry, which in 2012 brought in $23 million. According to Bill Huckaby, former director of the Glen Rose Convention and Visitors Bureau, “The dinosaurs are what drive us. You can’t develop a town of 2,000 into this kind of tourism revenue unless you’ve got something really special to promote.” [42] Mirroring a long-standing public fascination with dinosaurs, the duo is one of countless roadside attractions featuring dinosaurs throughout the United States. Some seek to recreate the thrill of discovery on a paleontological dig. Dinosaur World, also located in Glen Rose, is home to 150 life-size ancient reptiles in outdoor displays. Select ticket prices include an “excavation pass,” a chance for younger visitors to hunt for “dino gems” and fossils in a controlled, pre-stocked pit, an activity that is not allowed at the state park.[43]
Other dinosaur-themed parks capitalize on a certain kitsch. Virginia’s Dinosaur Land beasts, created around the same time as the 1964-1965 New York World’s Fair, at times appear weathered, but these dated models delight visitors with fantastic scenes of “dino-on-dino violence.” Bonus displays of other enormous beasts, including a 70-foot-long purple octopus and King Kong, introduce an element of fantasy. [44] Dinosaur Kingdom II, also in Virginia, is even more imaginative. Its outdoor exhibits pit the reptiles against Civil War soldiers in a goofy tale of time travel, mad science, and buried treasure. [45] At Cascade Caverns outside Boerne, Texas, a green T. rex (donated by Walt Disney studios following a movie filming in the area)[46] greets visitors for seemingly little reason other than its novelty. In Texas and beyond, life-size dinosaurs remain an undeniably popular roadside attraction. Even Dinosaur Valley State Park’s beasts share in this appeal, the figures themselves a unique combination of art, advertising, entertainment, education, and nostalgia.
As ready-made photo ops, Rex and Bronto add an element of fun to the park, but scientific analysis of the site’s world-class, priceless prehistoric resources remains an interpretive priority. Static exhibits and ranger-led tours of the tracks teach basic paleontological concepts and emphasize the educational value of the site. Such explanations do not always satisfy visitors’ curiosity, however, as the tracks regularly prompt questions from those whose religious faith offers a different viewpoint. Some proponents of creationism, or intelligent design, assert that dinosaurs and humans lived contemporaneously. Oddly-shaped depressions in local river and creek beds can appear to lend support to such claims, though scientists dispute the idea. The area’s cache of dinosaur tracks attracted Carl Edward Baugh, a self-proclaimed dinosaur “discoverer” and televangelist, who in 1984 founded the Creation Evidence Museum of Texas, located two miles from the state park. [47] The town is also home to “The Promise,” an award-winning outdoor play detailing the life of Jesus Christ in the state’s largest permanent amphitheater. In Glen Rose, entertainment and education, science and religion, two pairs of seemingly contradictory interests, literally exist side-by-side.
No matter your persuasion regarding geological time and the prehistoric era, public perceptions of dinosaurs are derived from figures like Rex and Bronto, who help young and old alike picture a long-ago world. Admittedly, the duo is at best a well-researched and well-intentioned projection, but Dinosaur Valley’s unofficial park hosts are more than an Instagram-worthy selfie backdrop. Now over 50 years of age, the dinosaurs are eligible for inclusion in the National Register of Historic Places. Vitally important to the local economy, they attract millions of visitors (and their dollars) to a relatively small-but-growing Texas community and frame discussions about science and faith. It is fitting that two dinosaurs purchased with Sinclair Oil money continue to welcome motoring tourists to Dinosaur Valley State Park, just as they greeted World’s Fair-goers in New York City several decades earlier. Yet Rex and Bronto, Texas State Parks’ most anomalous “residents,” pose significant preservation and interpretive challenges: the maintenance of life-size ancient beasts now historic in their own right, the juxtaposition of roadside kitsch alongside scientific inquiry, and the continuing debate between evolution and creationism.
Endnotes
Paleontologists today use the name Apatosaurus instead of Brontosaurus. To avoid confusing the reader when discussing historical events, this paper will use the latter. “Dino, An American Icon,” Sinclair Oil, accessed February 6, 2018, https://www.sinclairoil.com/dino-history
Lawrence Samuel, The End of Innocence: The 1964-1965 New York World’s Fair (Syracuse: Syracuse University Press, 2007), 174.
Liz Robbins, “Recalling a Vision of the Future,” New York Times, April 18, 2014, https://www.nytimes.com/interactive/2014/04/20/nyregion/worlds-fair-1964-memories.html.
Three of the nine Dinoland figures (Tyrannosaurus rex, Brontosaurus, and Stegosaurus) had mechanized parts. Brontosaurus had two interchangeable necks, one moving and one static, which were swapped during the winter and summer. All animatronic features were removed from the Bronto and Rex before the pair arrived in Glen Rose, Texas. “Dinosaur Fever – Sinclair’s Icon.” American Oil and Gas Historical Society, accessed January 30, 2018, https://aoghs.org/oil-almanac/sinclair-dinosaur/; “Dino: The Sinclair Oil Dinosaur Fact Sheet,” Sinclair Oil, April 21, 2016, https://www.sinclairoil.com/sites/default/files/Sinclair-Oil-DINO-Fact-Sheet.pdf; “Dinosaurs ‘Live Again’ in World’s Fair Show,” The San Antonio Light, April 25, 1965; “Sinclair at nywf64.com, 1964 & 1965 Official Guidebook Entries,” accessed January 15, 2018, http://nywf64.com/sinclair01.shtml.
Exactly how much Jonas earned for his dinosaur models is unclear from available print publications, which state the artist either received a $250,000 commission or secured a $1 million contract. Henry B. Comstock, “Getting There Is Half the Fun,” Popular Science, September 1963, 50-53; John G. Rogers, “Watch Out for Brontosaurus When Driving in Catskills,” The Citizen-Advertiser, August 26, 1963; “Travel Back in Time with Dino - 1960,” Sinclair Oil, accessed January 16, 2018, https://www.sinclairoil.com/history/1960.html; Sinclair Oil, “Dino: The Sinclair Oil Dinosaur Fact Sheet.”
Dick Kleiner, “Dinosaurs, Long Extinct, Make Comeback at Fair,” The Daily News-Telegram, May 12, 1964.
David W. Dunlap, “World’s Fair Showed a Different Side of the Port Authority.” New York Times, April 16, 2014, https://www.nytimes.com/2014/04/17/nyregion/worlds-fair-brought-out-port-authoritys-whimsical-side.html; “Dinosaurs ‘Live Again’ in World’s Fair Show,” The San Antonio Light; Comstock, “Getting There is Half the Fun;” Bulkeley, “Dinosaur Parade;” Sinclair Oil, “Dino: The Sinclair Oil Dinosaur Fact Sheet.”
“A Royal Legacy,” New York World’s Fair 1964-1965, accessed February 5, 2018, http://nywf64.com/usrub08.shtml; Mark Brush, “Here’s what it’s like inside and on top of the Giant Uniroyal Tire,” Michigan Radio, May 22, 2015, http://michiganradio.org/post/heres-what-its-inside-and-top-giant-uniroyal-tire.
As of this writing, Sinclair’s Dinoland figures reside at the following locations: Triceratops at Museum of Science & Industry in Louisville, Kentucky; Stegosaurus at Dinosaur National Monument in Jensen, Utah; Corythosaurus in Independence, Kansas; Ankylosaurus at the Houston Museum of Natural Science, Texas; Struthiomimus at the Milwaukee Public Museum, Wisconsin; Trachodon at the Brookfield Zoo in Brookfield, Illinois. The Ornitholestes was stolen and never recovered. However, copies from the original mold have been displayed in New Jersey and Calgary, Alberta. Sinclair Oil, “Travel Back in Time with Dino – 1960.”
Brontosaurus Definition & Meaning, Dictionary.com (https://www.dictionary.com/browse/brontosaurus)

Scientific definitions for brontosaurus
word history
Take a little deception, add a little excitement, stir them with a century-long mistake, and you have the mystery of the brontosaurus. Specifically, you have the mystery of its name. For 100 years this 70-foot-long, 30-ton vegetarian giant had two names. This case of double identity began in 1877, when bones of a large dinosaur were discovered. The creature was dubbed apatosaurus, a name that meant deceptive lizard or unreal lizard. Two years later, bones of a larger dinosaur were found, and in all the excitement, scientists named it brontosaurus or thunder lizard. This name stuck until scientists decided it was all a mistake: the two sets of bones actually belonged to the same type of dinosaur. Since it is a rule in taxonomy that the first name given to a newly discovered organism is the one that must be used, scientists have had to use the term apatosaurus. But thunder lizard had found a lot of popular appeal, and many people still prefer to call the beast brontosaurus.
Cultural definitions for Brontosaurus
A large herbivorous (see herbivore) dinosaur, perhaps the most familiar of the dinosaurs. The scientific name has recently been changed to Apatosaurus, but Brontosaurus is still used popularly. The word is from the Greek, meaning “thunder lizard.”
Dinosaurs – Planet Pailly
https://planetpailly.com/tag/dinosaurs/
Hello, friends! Welcome back to Sciency Words, a special series here on Planet Pailly where we take a closer look at interesting and new scientific terms in order to expand our scientific vocabularies together! Today’s Sciency Word is:
PRESERVATION BIAS
So is there life on Mars? Well, there could be. It’s not totally impossible. But as I’ve said before on this blog, I think the odds of us finding living things on Mars are pretty low. The odds of us finding dead things on Mars, however… I think those odds are much better!
Or at least I did think that until I read this paper, entitled “A Field Guide to Finding Fossils on Mars.” That paper introduced me to the concept of “preservation potential,” and subsequent research led me to learn about something paleontologists call “preservation bias.”
Basically, turning into a fossil isn’t easy. A lot of factors have to come together just right in order for a dead organism to become preserved in the fossil record. As that Martian fossil field guide explains:
On Earth, most organisms fail to fossilize because their remains are physically destroyed, chemically oxidized or dissolved, digested by their own enzymes, or consumed by other organisms. Fossilization only occurs when processes of preservation outpace degradation.
Preservation bias refers to the fact that certain organisms—or certain parts of certain organisms—stand a better chance of fossilizing than others. Preservation bias can also refer to the fact that some environments (rivers and lakes, for example) do a better job creating and preserving fossils than others (for example, deserts).
A lot of factors can get involved in this, but as a quick and easy example, think of the dinosaurs. Dinosaur bones fossilize easily enough. Other parts of the dinosaur… not so much. That, my friends, is preservation bias at work, favoring hard tissue, like bone, over soft tissue, like muscle or fat.
Now imagine what would have happened if dinosaurs somehow evolved without bones (that’s a weird concept, I know, but just bear with me a moment). How much would we know about those boneless dinosaurs today? Would we know about them at all? Those hypothetical boneless dinosaurs could have roamed the earth for billions of years and left hardly a trace of evidence for us modern humans to find!
Which brings us back to Mars. There was a time, very long ago, when Mars was a much warmer and wetter planet than he is today. It’s possible—no, I’d say it’s probable!—that life of some kind developed on ancient Mars, just as it did on ancient Earth. But would that ancient Martian life have left us any fossils to find? Maybe. Maybe not. It depends on the various factors involved in preservation bias.
There’s an important science fact that I wish more people were aware of. Birds are not merely the descendants of dinosaurs. According to a taxonomic system called cladistics (also known as phylogenetic systematics), birds are dinosaurs. To quote this article from DinoBuzz:
Using proper terminology, birds are avian dinosaurs; other dinosaurs are non-avian dinosaurs, and (strange as it may sound) birds are technically considered reptiles. Overly technical? Just semantics? Perhaps, but still good science.
So with that in mind, the following statements are 100% true:
I often wake up to the sound of noisy dinosaurs outside my window.
I sometimes see dinosaurs swimming in the river near my house.
I hate it when dinosaurs poop on my car.
I enjoy eating dinosaur meat. Sometimes I put dinosaur meat on sandwiches or in salads.
Anyway, what sort of experiences have you had with dinosaurs in your daily life? Please share in the comments!
P.S.: Have you seen those dinosaur-shaped chicken nuggets in the grocery store? They’re cute. I’m just not convinced that they’re made from 100% real dinosaur meat.
Sciency Words: (proper noun) a special series here on Planet Pailly focusing on the definitions and etymologies of science or science-related terms. Today’s Sciency Word is:
THE SILURIAN HYPOTHESIS
I’ve heard several variations on this joke. Why did the dinosaurs go extinct? Because they didn’t put enough money into their space program.
But what if that isn’t a joke? What if the dinosaurs (or some other prehistoric creatures) did establish an advanced civilization right here on Earth millions of years before we came along? Could such a civilization come and go without leaving any trace for us modern humans to find? Or could the traces be there for us to see, and we just haven’t recognized them yet?
In 2018, NASA climate scientist Gavin Schmidt and University of Rochester astrophysicist Adam Frank presented this idea in a formal scientific paper titled “The Silurian Hypothesis: Would it be possible to detect an industrial civilization in the geological record?” As Schmidt and Frank explain in a footnote:
We name the hypothesis after a 1970 episode of the British science fiction TV series Doctor Who where a long buried race of intelligent reptiles “Silurians” are awakened by an experimental nuclear reactor. We are not however suggesting that intelligent reptiles actually existed in the Silurian age, nor that experimental nuclear physics is liable to wake them from hibernation.
Schmidt and Frank go on to examine some of the ways human industrial activities have changed this planet, and how those changes are being recorded geologically. They also examine a few of the oddities and anomalies in the geological record as we currently know it.
To be clear, there is absolutely no definitive evidence that another advanced civilization existed on Earth before our own. Schmidt and Frank go to great pains to emphasize that they don’t actually believe their own hypothesis to be true.
The Silurian Hypothesis is intended to be more of a thought experiment than anything else. It’s meant to help us better understand how human civilization is changing this planet, and also (remember Schmidt and Frank published this in an astrobiology journal) how alien civilizations might be changing their own worlds.
P.S.: The Silurian Hypothesis is also a wonderful example of how science fiction can inspire real life science.
I got a little bit behind on my research this week, so I don’t have anything prepared for this week’s episode of Sciency Words. However, I recently stumbled upon this video which seems thematically appropriate in relation to the Sciency Words series.
It’s a TED Talk with Jack Horner, the world-famous paleontologist who discovered Maiasaura and demonstrated that some dinosaur species did, in fact, care for their young. If you remember Alan Grant from the original Jurassic Park, Jack Horner served as the real-life inspiration for that character.
The TED Talk is about how dinosaurs get their names and how that naming process has led to some pretty glaring scientific mistakes.
Sciency Words is mainly a series about science, but it’s also about linguistics and the philosophy of language. Words have power. They shape our thoughts, and they can change the way we understand and experience the world. And as Jack Horner’s TED Talk illustrates, if we’re careless about the words we choose to use, then our words can mislead us, and we can end up blinding ourselves to things that should be obvious.
Today’s post is part of a special series here on Planet Pailly called Sciency Words. Each week, we take a closer look at an interesting science or science-related term to help us expand our scientific vocabularies together. Today’s term is:
DINOSAUR
I did a Sciency Words post on the word dinosaur before, when I participated in last year’s A to Z Challenge, but I never felt satisfied with that post. For one thing, I missed a golden opportunity to tell you one of the coolest sciency things I’ve learned: dinosaurs are not extinct.
Or rather, to be more technical about it, dinosaurs are or are not extinct depending on how you define the word dinosaur. You see we have two different systems for classifying life: the traditional Linnaean system and an alternative system called cladistics.
In 1735, Swedish botanist Carl Linnaeus published his book Systema Naturae, introducing the world to his system of binomial nomenclature. All of a sudden, we humans became Homo sapiens, our cats became Felis catus, and so forth. But under Linnaeus’s system, plants and animals (and also minerals) had to be classified purely according to their physical characteristics, not their evolutionary heritage. Darwin’s On the Origin of Species wouldn’t be published for another 124 years.
Then in the 1950s, German entomologist Willi Hennig introduced a new and improved system which he called phylogenetic systematics, but which has since been renamed cladistics. A “clade,” in cladistics, is a group of animals that share a common ancestor, and if one animal is part of any given clade, then all of that animal’s descendants are part of that clade too, according to Hennig’s system.
Both of these systems are still in use today. As this article from Ask a Biologist explains, “[Cladistics] is useful for understanding the relationships between animals, while the Linnaean system is more useful for understanding how animals live.”

So because birds evolved from dinosaurs, birds are dinosaurs, cladistically speaking. Birds are like a subcategory of dinosaur. And thus the dinosaurs are still here, strutting and flapping about on this planet.
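Hennig’s rule, that membership in a clade automatically extends to every descendant, is basically a recursive definition, so it’s easy to sketch as a little tree-walking function. Here’s a minimal Python sketch (the tree below is wildly simplified, and the group names are just for illustration):

```python
# A wildly simplified evolutionary tree: each group maps to the
# groups that descended directly from it.
TREE = {
    "dinosauria": ["theropoda", "sauropodomorpha"],
    "theropoda": ["tyrannosaurus", "aves"],
    "sauropodomorpha": ["brontosaurus"],
    "aves": ["sparrow", "penguin"],
}

def in_clade(animal, clade):
    """Hennig's rule: an animal belongs to a clade if it is the clade's
    founding group, or descends from anything inside the clade."""
    if animal == clade:
        return True
    return any(in_clade(animal, child) for child in TREE.get(clade, []))

print(in_clade("sparrow", "dinosauria"))   # True: birds are dinosaurs
print(in_clade("brontosaurus", "aves"))    # False: sauropods aren't birds
```

Run it with any pair of names and you get the cladistic answer: every bird comes back as a dinosaur, but no non-avian dinosaur comes back as a bird.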
Today’s post is a special A to Z Challenge edition of Sciency Words, an ongoing series here on Planet Pailly where we take a look at some interesting science or science related term so we can all expand our scientific vocabularies together. In today’s post, T is for:
THAGOMIZER
Once upon a time, there was a caveman by the name of Thag Simmons. According to Gary Larson’s Far Side comic strip, Thag met an unfortunate end when he was clubbed to death by the spiky tail of a stegosaurus. As a result, cavemen began to call the stegosaur’s spiky tail a thagomizer.
Larson’s thagomizer comic was originally published in 1982. Death by thagomizer may sound like a gruesome way to go, but based on my own research about cavemen/dinosaur interactions, I have reason to believe Thag Simmons had it coming.
Over the course of this Sciency Words: A to Z series, we’ve seen some scientific terms that were pretty clever, and some that were kind of dumb, and a few that are truly misleading. But until now, we haven’t talked about scientific terms that come from pop culture.
In 1993, paleontologist Ken Carpenter was making a presentation about the most complete stegosaurus skeleton ever found, and he needed a term for the dinosaur’s distinctive spiky tail. One of the spikes had apparently broken and healed, which was compelling evidence that stegosaurs really did use their spiky tails as weapons.
In homage to Gary Larson, Carpenter chose the term thagomizer, and the term has now become the proper, semi-official term for that part of stegosaurus anatomy.
I want to thank @breakerofthings for letting me know about this term. I’d originally planned to do something else for T, but I think this was much better. Be sure to check out A Back of the Envelope Calculation, where “breaker” is doing an A to Z Challenge series on materials science in Sci-Fi/Fantasy.
Today’s post is a special A to Z Challenge edition of Sciency Words, an ongoing series here on Planet Pailly where we take a look at some interesting science or science related term so we can all expand our scientific vocabularies together. In today’s post, D is for:
DINOSAUR
When I was a kid, dimetrodon was my favorite dinosaur. It has a sail on its back. How cool is that?
Then I found out that dimetrodon is not a dinosaur. It’s just a lizard. Then I found out from this video that it’s not even a lizard.
So today, I thought I’d give you a quick tip on how to tell when a “dinosaur” is actually not a dinosaur. Sciency Words is all about defining scientific terms, and paleontologists use several key features to define what is or isn’t a dinosaur. For example: the number of openings in the skull, the shape of the hip bone, the type of joint at the ankle….
If you’re a professional dinosaur scientist, you need to know this stuff. But for the rest of us, the easiest way to tell (in my opinion) is by looking at the orientation of the legs. Dinosaur legs are vertical to the ground, not horizontal. They go straight up and down, rather than being splayed out to the sides.
So if you think it’s a dinosaur, but the legs are splayed apart, it’s not a dinosaur.
If you’ve ever seen a crocodile or salamander try to run, you can understand why having your legs splayed apart like that is a disadvantage.
Standing upright on their vertical legs, dinosaurs had a much easier time walking and running on land. Also, vertical legs can support more weight, allowing dinosaurs to become much bigger and much heavier than their cousins, the amphibians, reptiles, and whatever the heck dimetrodons were.
Next time on Sciency Words: A to Z Challenge, we’ll find out what our planet’s name is.
Today’s post is a special A to Z Challenge edition of Sciency Words, an ongoing series here on Planet Pailly where we take a look at some interesting science or science related term so we can all expand our scientific vocabularies together. In today’s post, B is for:
BRONTOSAURUS
I’ll never forget that sad moment in my childhood when I found out that brontosaurus is not a real dinosaur. Someone made a mistake, and we had to call brontosaurus apatosaurus instead.
Here’s a quick rundown of events in the brontosaur/apatosaur naming controversy:
1877: A dinosaur skeleton is discovered and given the scientific name (genus and species) Apatosaurus ajax.
1879: Another dinosaur skeleton is discovered and given the name Brontosaurus excelsus.
1903: Upon further examination, it’s determined that these two dinosaur specimens are too closely related and should be classified as the same genus. Since the genus Apatosaurus was identified first, Brontosaurus excelsus became Apatosaurus excelsus.
On a personal note, I was stunned to find out all this happened way back in 1903. When I was a kid, I was under the impression that this was a much more recent development.
Anyway, there’s some good news for brontosaurus fans. In 2015, Brontosaurus was reinstated as its own genus. Turns out that while those two skeletons are very similar, there’s enough of a difference in the structure of the neck to justify classifying them separately.
By the way, brontosaurus means “thunder lizard,” because of the sound it must have made when it walked. Apatosaurus apparently means “deceptive lizard.” I’m not sure why they called it that back in 1877, but after this case of attempted identity theft, I’d say the name fits.
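The tie-breaking rule at work in 1903, zoology’s “principle of priority” (the oldest published name wins), is simple enough to sketch in a couple of lines. This is just an illustration using the dates from the timeline above:

```python
# Principle of priority: when two genus names turn out to describe the
# same animal, the earliest published name is the valid one.
def valid_name(synonyms):
    """synonyms: list of (name, year_published) pairs."""
    return min(synonyms, key=lambda pair: pair[1])[0]

print(valid_name([("Apatosaurus", 1877), ("Brontosaurus", 1879)]))
# Apatosaurus, which is exactly what happened in 1903
```

And the 2015 reversal didn’t break this rule: Brontosaurus got its name back not because priority changed, but because the two skeletons were judged different enough to be separate genera again.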
Next time on Sciency Words: A to Z Challenge, we’ll head out into space and meet some centaurs.
Today’s post is part of a special series here on Planet Pailly called Sciency Words. Each week, we take a closer look at an interesting science or science-related term to help us all expand our scientific vocabularies together. Today’s term is:
THE K-T EVENT
You already know this story. It was 65 million years ago. There were dinosaurs, there was an asteroid…
It’s easily the most famous asteroid impact in Earth’s history, and it’s called the K-T Event, or sometimes the K-Pg Event.
In geology shorthand, the letters stand for:
K: the Cretaceous period, abbreviated with a K from its German name, Kreide. This was the last period of geological history in which dinosaurs roamed the Earth.

T: the Tertiary period, which immediately followed the Cretaceous. According to the International Commission on Stratigraphy (ICS), we’re not supposed to use this name anymore, but people still do. It’s sort of like how some people keep calling Pluto a planet, no matter what the International Astronomical Union (IAU) says.
Pg: the Paleogene period, which is the period immediately following the Cretaceous according to the ICS’s new list of geological periods. Please note, the Tertiary and Paleogene are not really interchangeable terms. They have the same starting point, but different end points.
Geologists and paleontologists puzzled for decades over a layer of clay separating Cretaceous and Tertiary (or Paleogene) rock. They called it the K-T boundary. There were several competing hypotheses about what might have caused this boundary and how it related to the mass extinction event that killed off the dinosaurs.
Then, in 1980, a team of researchers reported that this boundary clay was loaded with the element iridium. Platinum group metals like iridium are extremely rare on Earth (except in the planet’s core) but common in asteroids. So whenever you find lots of iridium in Earth’s crust, you can justifiably assume an asteroid put it there.
The most likely scenario is that a large asteroid, about 10 km in diameter, smashed into Earth, flinging dust and debris high into Earth’s atmosphere. Enough to block out the sun worldwide for several years. This global dust cloud would have included plenty of material from the asteroid itself, which would have been partially vaporized by the heat of the impact.
A major problem with the original 1980 paper was that, at the time, no known impact crater of the appropriate age was sufficiently large. But of course, that was back in 1980. The crater has since been found in the Yucatan Peninsula, and now just about everybody knows the story of the K-T Event (even if they don’t know it’s called that).
P.S.: The K-T Event is not to be confused with the Katie Event. You know, that time your BFF Katie had waaaaay too much to drink and threw a temper tantrum of apocalyptic proportions.
Addendum: While there does seem to be general, widespread consensus that the K-T asteroid impact either caused the extinction of the dinosaurs or contributed significantly to their demise, there is not universal agreement. As Planetary Defense Commander notes in the comments, there are other possibilities worth considering.
Apatosaurus: Don’t Call It Brontosaurus! | AMNH
https://www.amnh.org/explore/news-blogs/on-exhibit-posts/brontosaurus-apatosaurus

Brontosaurus or Apatosaurus?
It’s one of the most recognizable dinosaur species, yet most people know it by a name most paleontologists stopped using more than a century ago: brontosaurus.
One of the most iconic specimens of this massive animal is on display in the Hall of Saurischian Dinosaurs, the first sauropod—a species belonging to the group of massive, herbivorous, long-tailed dinosaurs—to be mounted and displayed at the Museum.
By the time that Museum paleontologist Walter Granger discovered a large set of fossilized bones at Wyoming’s Como Bluff in 1898, O. C. Marsh of Yale University had already characterized what he believed to be two distinct sauropod species: Apatosaurus and Brontosaurus. Granger believed his findings, catalogued as AMNH 460, to be Brontosaurus.
The Museum’s “Brontosaurus” took six years to mount and used four different specimens collected from Como Bluff by Granger and other Museum paleontologists. Casts from Marsh’s Apatosaurus were used to fill in the missing bones, and since Granger and his team did not find a head with their specimen, they gave it a sculpted head of another sauropod, Camarasaurus. Figuring out how to support the large skeleton was another challenge, since no specialized materials existed for this purpose. The mount was later revised to be more anatomically accurate, but the original framework—which consists of repurposed pipes and plumbing fixtures—still supports the dinosaur’s torso.
By the time the mount was complete in 1905, the dinosaur’s name had been officially changed. In 1903, Elmer Riggs, a paleontologist from Chicago’s Field Museum of Natural History, made the case that Apatosaurus was actually a juvenile Brontosaurus, and that the two names actually referred to the same species. The name given to the first specimen of the species to be discovered, Apatosaurus, became the accepted scientific name; Brontosaurus became invalid, or, at best, considered a redundancy—even though for most people, Brontosaurus remained the best-known name for the popular dinosaur.
Nearly a century after it was first mounted, the Museum’s Apatosaurus underwent another major revision. During the renovation of the fossil halls in the 1990s, in a project that took three preparators a year to complete, four neck vertebrae were added and the tail was extended and lifted off the ground to reflect the lack of evidence for a dragging tail in preserved sauropod tracks.
The Apatosaurus also received a new head to replace the Camarasaurus skull used by Granger in 1905. Earl Douglass of Pittsburgh’s Carnegie Museum found an Apatosaurus skeleton with a detached skull nearby a few years after the Apatosaurus was first mounted, but for decades paleontologists disagreed over whether the skull belonged with the body. During the renovation of the fossil halls, the Museum replaced the Camarasaurus head with a cast of the skull from the Carnegie Museum.
Apatosaurus vs Brontosaurus: Is There a Difference? – AZ Animals
https://a-z-animals.com/blog/apatosaurus-vs-brontosaurus/
When it comes to dinosaurs, there is still so much for us to learn, including the differences between Apatosaurus and Brontosaurus. While the debate over whether or not these two creatures are one and the same still continues, there is scientific evidence to suggest that these are two different dinosaur species rather than the same one. But how can you tell?
In this article, we will discuss the differences between apatosaurus and brontosaurus, even if those differences are subtle. We will also discuss the hotly contested debate surrounding the genus and species of these two dinosaurs, centuries after their existence. Let’s get started.
Comparing Apatosaurus vs Brontosaurus
There is enough evidence to suggest that apatosaurus were far larger than brontosaurus, and their skull shapes likely differed from one another.
                   Apatosaurus                                    Brontosaurus
Size               Huge compared to Brontosaurus                  Smaller than Apatosaurus
Appearance         25–50 tons; larger, lower-set neck             Smaller than Apatosaurus, with a long
                   and stocky legs                                neck; different skull shape
Location Found     North America                                  North America
Era                Late Jurassic Period                           Late Jurassic Period
Genus or Species   Apatosaurus excelsus                           Brontosaurus excelsus
There are only so many differences between Apatosaurus and Brontosaurus, which is why the debate over whether or not they are different species continues. However, there is enough evidence to suggest that Apatosaurus was far larger than Brontosaurus, and that their skull shapes differed from one another. That said, these dinosaurs existed during the same era, and fossils of both were found in the same locations in North America, so it is difficult to say for certain.
Let’s discuss some of these possible differences in more detail so that you can make your own informed decision.
It is estimated that apatosaurus often exceeded 25 tons in weight, while brontosaurus weighed closer to 15 tons.
Apatosaurus vs Brontosaurus: Locations Found and Era Alive
Both Apatosaurus and Brontosaurus lived in the Late Jurassic, toward the end of the Jurassic period (which spanned roughly 145–200 million years ago). However, many scientists believe that Brontosaurus was extinct by the end of the Jurassic period, making its time on Earth relatively short compared to Apatosaurus. Given an overall lack of research and information, there is no guarantee that this is even true.
Both apatosaurus and brontosaurus were discovered in North America, which suggests that these two dinosaurs may have existed in the same environments and areas. Again, given a lack of fossil records and information, it is difficult to confirm this fact.
Apatosaurus vs Brontosaurus: Size
A potential difference between Apatosaurus and Brontosaurus is their overall size. Apatosaurus was theoretically larger than Brontosaurus, by a great deal. It is estimated that Apatosaurus often exceeded 25 tons in weight, while Brontosaurus weighed closer to 15 tons. This also indicates that the overall body mass of Apatosaurus far outweighed that of Brontosaurus.
Brontosaurus appear to have thinner and longer tails when compared to apatosaurus.
Apatosaurus vs Brontosaurus: Physical Appearance
There are some potential physical differences between Apatosaurus and Brontosaurus. We have already discussed that Apatosaurus was larger than Brontosaurus, but there are some other physical differences as well. One of the main differences, which might be obvious if only we knew it, is the skull shape of these two dinosaurs: a Brontosaurus skull has never been definitively discovered or identified.
However, besides this, Brontosaurus appears to have had a thinner and longer tail compared to Apatosaurus. The ribs of Brontosaurus may have also been larger and taller than those of Apatosaurus, leading to a larger barrel chest. Both dinosaurs had spines along their backs and extremely long necks that scientists are still baffled by.
The overall size of both of these dinosaurs confuses scientists and researchers to this day. Many discoveries have indicated that the bones of both apatosaurus and brontosaurus are full of holes that lightened the load on their bodies. Given the large size of apatosaurus, it has been suggested that they spent the majority of their lives in the water, which would have lent some buoyancy to their bodies. Brontosaurus existed primarily on land, which is another key difference between these two creatures.
Apatosaurus vs Brontosaurus: The Genus Debate
While this is less of a difference and more of a debate, recent studies have suggested that apatosaurus and brontosaurus are indeed different species rather than the same animal that has simply been renamed. Let’s talk more about the history of these dinosaurs now.
When apatosaurus was first discovered, it was found alongside multiple other specimens and fossils. The classification of the apatosaurus and brontosaurus species dates back to the early 1900s, when scientists decided that apatosaurus and brontosaurus belonged to the same genus. However, as technology has progressed and more research has been published, this may no longer be the case.
A study performed in 2015 suggests that brontosaurus are different enough from apatosaurus to merit their own genus classification. Furthermore, both apatosaurus and brontosaurus contain multiple species under their respective genus umbrellas. While there is still far more research to be done and many paleontologists disagree about whether they are indeed separate creatures, the debate continues. It is a fascinating thing to witness in modern times, and hopefully both apatosaurus and brontosaurus receive the recognition they deserve!
Apatosaurus vs Brontosaurus: Is There a Difference?
When it comes to dinosaurs, there is still so much for us to learn, including the differences between apatosaurus vs brontosaurus. While the debate over whether or not these two creatures are related still continues, there is scientific evidence to suggest that these are two different dinosaur species rather than the same one. But how can you tell?
In this article, we will discuss the differences between apatosaurus and brontosaurus, even if those differences are subtle. We will also discuss the hotly contested debate surrounding the genus and species of these two dinosaurs, nearly a century and a half after their discovery. Let's get started.
Comparing Apatosaurus vs Brontosaurus
There is enough evidence to suggest that apatosaurus were far larger than brontosaurus, and their skull shapes likely differed from one another.
| | Apatosaurus | Brontosaurus |
| --- | --- | --- |
| Size | Huge compared to Brontosaurus | Smaller than Apatosaurus |
| Appearance | Over 25-50 tons; larger and lower-set neck and stocky legs | Smaller than Apatosaurus, with a long neck; different skull shape from Apatosaurus |
| Location Found | North America | North America |
| Era | Late Jurassic Period | Late Jurassic Period |
| Genus or Species | Apatosaurus excelsus | Brontosaurus excelsus |
Key Differences Between Apatosaurus vs Brontosaurus
There are only a few clear differences between apatosaurus and brontosaurus, which is why the debate over whether they are different species continues. There is enough evidence to suggest that apatosaurus were far larger than brontosaurus and that their skull shapes differed from one another. However, these dinosaurs existed during the same era, and fossils of both were found in the same locations in North America, so it is difficult to say for certain.
The Brontosaurus Is Back - Scientific American
Some of the largest animals to ever walk on Earth were the long-necked, long-tailed dinosaurs known as the sauropods—and the most famous of these giants is probably Brontosaurus, the "thunder lizard." Deeply rooted as this titan is in the popular imagination, however, for more than a century scientists thought it never existed.
The first of the Brontosaurus genus was named in 1879 by famed paleontologist Othniel Charles Marsh. The specimen still stands on display in the Great Hall of Yale's Peabody Museum of Natural History. In 1903, however, paleontologist Elmer Riggs found that Brontosaurus was apparently the same as the genus Apatosaurus, which Marsh had first described in 1877. In such cases the rules of scientific nomenclature state that the oldest name has priority, dooming Brontosaurus to another extinction.
Now a new study suggests resurrecting Brontosaurus. It turns out the original Apatosaurus and Brontosaurus fossils appear different enough to belong to separate groups after all. "Generally, Brontosaurus can be distinguished from Apatosaurus most easily by its neck, which is higher and less wide," says lead study author Emanuel Tschopp, a vertebrate paleontologist at the New University of Lisbon in Portugal. "So although both are very massive and robust animals, Apatosaurus is even more extreme than Brontosaurus."
The nearly 300-page study analyzed 477 different physical features of 81 sauropod specimens, involving five years of research and numerous visits to museum collections in Europe and the U.S. The initial goal of the research was to clarify the relationships among the species making up the family of sauropods known as the diplodocids, which includes Diplodocus, Apatosaurus, and now Brontosaurus.
The scientists conclude that three known species of Brontosaurus exist: Brontosaurus excelsus, the first discovered, as well as B. parvus and B. yahnahpin. Tschopp and his colleagues Octávio Mateus and Roger Benson detailed their findings online April 7 in PeerJ. "We're delighted that Brontosaurus is back," says Jacques Gauthier, curator of vertebrate paleontology and vertebrate zoology at Peabody, who did not participate in this study. "I grew up knowing about Brontosaurus—what a great name, 'thunder lizard'—and never did like that it sank into Apatosaurus."
For vertebrate paleontologist Mike Taylor at the University of Bristol in England, who did not take part in this research, the most exciting thing about this study is "the magnificent comprehensiveness of the work this group has done, the beautifully detailed and informative illustrations and the degree of care taken to make all their work reproducible and verifiable. It really sets a new standard. I am in awe of the authors," he says. Vertebrate paleontologist Mathew Wedel at Western University of Health Sciences in Pomona, Calif., who also did not collaborate on this paper, agrees, saying "the incredible amount of work here is what other research is going to be building on for decades."
Tschopp notes their research would have been impossible at this level of detail 15 or more years ago. It was only with many recent findings of dinosaurs similar to Apatosaurus and Brontosaurus that it became possible to reexamine how different they actually were and breathe new life into Brontosaurus, he says.
While Kenneth Carpenter, director and curator of paleontology at Utah State University Eastern's Prehistoric Museum, finds this study impressive, he notes the fossil on which Apatosaurus is based has never been described in detail, and suggests the researchers should have done so if they wanted to compare it with Brontosaurus. "So is Brontosaurus valid after all?" he asks. "Maybe. But I think the verdict is still out."
All in all, these findings emphasize "that sauropods were much more diverse and fascinating than we've realized," Taylor says. Indeed, the recognition of Brontosaurus as separate from Apatosaurus is "only the tip of the iceberg," he adds. "The big mounted apatosaur at the American Museum of Natural History is probably something different again, yet to be named. Yet another nice complete apatosaur, which is in a museum in Tokyo, is probably yet another new and distinct dinosaur."
This sauropod diversity emphasizes "that the Late Jurassic [period] of North America in which they lived may have been a weird time," Wedel says. "You basically had an explosion of these things in what could be harsh environments, which raises the question of how they could have found enough food to have supported them all." In other words, research that helped resurrect Brontosaurus may have birthed new mysteries as well.
Brontosaurus Stomps Back to Claim Its Status as Real Dinosaur
Like Pluto losing its standing as a planet, Brontosaurus became a non-species. Now scientists say that may have been the wrong call.
By Ralph Martins, National Geographic
Published April 7, 2015
Brontosaurus, as imagined by paleontologists in the late 1800s: aquatic, and wearing a Camarasaurus skull. Later research would show that the sauropod actually had a slim, horselike skull.
If you grew up loving Brontosaurus only to be told it wasn't a real dinosaur, it's time to rejoice: the gentle giant may have received a new lease on life.
The giant sauropod, long thought to be an Apatosaurus that someone got wrong, was actually its own type of dinosaur all along, scientists say Tuesday in PeerJ.
In fact, Apatosaurus and Brontosaurus were different enough to be separate genera, rather than related species of the same genus.
The finding comes from a study on the evolution of diplodocids, the family to which these dinosaurs belonged. These giant herbivores lived in North America, Europe, and parts of Africa during the late Jurassic period, between 160 million and 145 million years ago.
“They’re a very widespread family, and we wanted to know more about relationships within the family,” says co-author Octávio Mateus, a paleontologist at the Universidade Nova de Lisboa in Portugal.
The new study revises the diplodocid family tree to feature Brontosaurus as an (old) new genus.
The Dino That Never Was
Brontosaurus has a colorful history. Named by O.C. Marsh in the 1880s, the dinosaur was identified in 1903 as a member of the Apatosaurus genus, which Marsh had found a few years earlier.
Since taxonomy honors the name that came first, Brontosaurus excelsus became Apatosaurus excelsus.
But the evocative name—which means "thunder lizard" in Greek—would live on for decades, until 1970s researchers ended the debate by showing that Brontosaurus and Apatosaurus had very similar skulls.
So the “thunder lizard” was condemned to the realm of the scientifically invalid, becoming the dinosaur that “never even existed.”
A New Order
Classifying dinosaurs from a fossil record is difficult, says Mateus. For one, “bones can’t tell us whether animals could reproduce with each other”—a good sign that they’re of the same species.
But the discovery of several diplodocid specimens in recent years has allowed a new approach: specimen-based analysis. Given enough specimens, scientists can examine and compare bones to show how animals are related.
Researchers looked at 81 diplodocid specimens, noting the presence or absence of each of 477 skeletal features. Closely related species shared a lot of these features, while species from different genera—like Brontosaurus and Apatosaurus—had much less in common.
Since skeletons vary between individuals as well, the researchers looked for a minimum number of clear differences—and at where the animals lived—to determine whether dinosaurs were actually different species.
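The tallying procedure described above can be illustrated with a small sketch: for each pair of specimens, count how many scored features disagree, skipping features that are not preserved in one of the fossils. This is only a toy example in Python; the feature vectors below are invented (the real study scored 477 features per specimen and fed the matrix into formal phylogenetic analysis rather than simple pairwise counts).

```python
# Toy version of specimen-based comparison: count how many skeletal
# features (1 = present, 0 = absent, None = not preserved) are scored
# differently between two specimens. All data here are invented.

def feature_differences(spec_a, spec_b):
    """Return (differing features, features compared) for two specimens."""
    diffs = 0
    compared = 0
    for a, b in zip(spec_a, spec_b):
        if a is None or b is None:
            continue  # feature missing in one fossil: cannot compare
        compared += 1
        if a != b:
            diffs += 1
    return diffs, compared

# Three hypothetical specimens scored for ten features (the study used 477).
apatosaurus_1 = [1, 1, 0, 1, 0, 1, 1, 0, None, 1]
apatosaurus_2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
brontosaurus_1 = [0, 1, 1, 0, 1, 1, 0, 1, 1, 0]

for name, spec in [("apatosaurus_2", apatosaurus_2),
                   ("brontosaurus_1", brontosaurus_1)]:
    diffs, compared = feature_differences(apatosaurus_1, spec)
    print(f"apatosaurus_1 vs {name}: {diffs}/{compared} features differ")
```

Specimens of the same species disagree on only a handful of features, while a pair straddling a genus boundary disagrees on many more; a minimum difference count, as described above, then serves as the cutoff.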
Their findings showed that the diplodocid family should be expanded to include two more genera—Brontosaurus and Galeamopus.
On the other hand, it turns out that Dinheirosaurus and Supersaurus were really just a single genus.
This is Brontosaurus as researchers see it today -- with a Diplodocus-like head. Illustration by Davide Bonadonna
Secrets in Stone
These fossils have been at the mercy of the elements for more than a hundred million years, but we can be pretty confident that Brontosaurus is back, says Mike Taylor, a paleontologist at the University of Bristol in the U.K.
It’s not unusual for bones to be distorted by natural forces, and allowing for warping is “part of the process,” Taylor says.
“What’s outstanding about this study is the extraordinary level of detail,” Taylor adds. “It’s going to be very easy for people to build on what they’ve done.”
Applying this method to other dinosaur families will help scientists understand evolution etched in fossil records, says Mateus.
Forget Extinct: The Brontosaurus Never Even Existed
Apatosaurus (right, opposite a Diplodocus skeleton at the Carnegie Museum of Natural History in Pittsburgh), is what paleontologist Othniel Charles Marsh actually found when he thought he'd discovered the Brontosaurus.
Joshua Franzos/Carnegie Museum of Natural History
It may have something to do with all those Brontosaurus burgers everyone's favorite modern stone-age family ate, but when you think of a giant dinosaur with a tiny head and long, swooping tail, the Brontosaurus is probably what you're seeing in your mind.
Well hold on: Scientifically speaking, there's no such thing as a Brontosaurus.
Even if you knew that, you may not know how the fictional dinosaur came to star in the prehistoric landscape of popular imagination for so long.
It dates back 130 years, to a period of early U.S. paleontology known as the Bone Wars, says Matt Lamanna, curator at the Carnegie Museum of Natural History in Pittsburgh.
Othniel Charles Marsh was a professor of paleontology at Yale who made many dinosaur fossil discoveries, including the Apatosaurus — and the fictional Brontosaurus.
Hulton Archive/Getty Images
The Bone Wars was the name given to a bitter competition between two paleontologists, Yale's O.C. Marsh and Edward Drinker Cope of Philadelphia. Lamanna says their mutual dislike, paired with their scientific ambition, led them to race dinosaur names into publication, each trying to outdo the other.
"There are stories of either Cope or Marsh telling their fossil collectors to smash skeletons that were still in the ground, just so the other guy couldn't get them," Lamanna tells Guy Raz, host of weekends on All Things Considered. "It was definitely a bitter, bitter rivalry."
The two burned through money, and were as much fame-hungry trailblazers as scientists.
It was in the heat of this competition, in 1877, that Marsh discovered the partial skeleton of a long-necked, long-tailed, leaf-eating dinosaur he dubbed Apatosaurus. It was missing a skull, so in 1883 when Marsh published a reconstruction of his Apatosaurus, Lamanna says he used the head of another dinosaur — thought to be a Camarasaurus — to complete the skeleton.
"Two years later," Lamanna says, "his fossil collectors that were working out West sent him a second skeleton that he thought belonged to a different dinosaur that he named Brontosaurus."
But it wasn't a different dinosaur. It was simply a more complete Apatosaurus — one that Marsh, in his rush to one-up Cope, carelessly and quickly mistook for something new.
This photograph from 1934 shows the Carnegie Museum's Apatosaurus skeleton on the right — wearing the wrong skull.
Carnegie Museum of Natural History
Although the mistake was spotted by scientists by 1903, the Brontosaurus lived on, in movies, books and children's imaginations. The Carnegie Museum in Pittsburgh even topped its Apatosaurus skeleton with the wrong head in 1932. The apathy of the scientific community and a dearth of well-preserved Apatosaurus skulls kept it there for nearly 50 years.
That Brontosaurus finally met its end in the 1970s when two Carnegie researchers took a second look at the controversy. They determined a skull found in a quarry in Utah in 1910 was the true Apatosaurus skull. In 1979 the correct head was placed atop the museum's skeleton.
The Brontosaurus was gone at last, but Lamanna suggests the name stuck in part because it was given at a time when the Bone Wars fueled intense public interest in the discovery of new dinosaurs. And, he says, it's just a better name.
Apatosaurus - Wikipedia
The cervical vertebrae of Apatosaurus are less elongated and more heavily constructed than those of Diplodocus, a diplodocid like Apatosaurus, and the bones of the leg are much stockier despite being longer, implying that Apatosaurus was a more robust animal. The tail was held above the ground during normal locomotion. Apatosaurus had a single claw on each forelimb and three on each hindlimb. The Apatosaurus skull, long thought to be similar to Camarasaurus, is much more similar to that of Diplodocus. Apatosaurus was a generalized browser that likely held its head elevated. To lighten its vertebrae, Apatosaurus had air sacs that made the bones internally full of holes. Like that of other diplodocids, its tail may have been used as a whip to create loud noises, or, as more recently suggested, as a sensory organ.
The skull of Apatosaurus was confused with that of Camarasaurus and Brachiosaurus until 1909, when the holotype of A. louisae was found, and a complete skull just a few meters away from the front of the neck. Henry Fairfield Osborn disagreed with this association, and went on to mount a skeleton of Apatosaurus with a Camarasaurus skull cast. Apatosaurus skeletons were mounted with speculative skull casts until 1970, when McIntosh showed that more robust skulls assigned to Diplodocus were more likely from Apatosaurus.
Apatosaurus is a genus in the family Diplodocidae. It is one of the more basal genera, with only Amphicoelias and possibly a new, unnamed genus more primitive. Although the subfamily Apatosaurinae was named in 1929, the group was not used validly until an extensive 2015 study. Only Brontosaurus is also in the subfamily, with the other genera being considered synonyms or reclassified as diplodocines. Brontosaurus has long been considered a junior synonym of Apatosaurus; its type species was reclassified as A.excelsus in 1903. A 2015 study concluded that Brontosaurus is a valid genus of sauropod distinct from Apatosaurus, but not all paleontologists agree with this division. As it existed in North America during the late Jurassic, Apatosaurus would have lived alongside dinosaurs such as Allosaurus, Camarasaurus, Diplodocus, and Stegosaurus.
Comparison of A. ajax (orange) and A. louisae (red) with a human (blue) and Brontosaurus parvus (green)
Apatosaurus was a large, long-necked, quadrupedal animal with a long, whip-like tail. Its forelimbs were slightly shorter than its hindlimbs. Most size estimates are based on specimen CM3018, the type specimen of A.louisae, reaching 21–23 m (69–75 ft) in length and 16.4–22.4 t (16.1–22.0 long tons; 18.1–24.7 short tons) in body mass.[5][6][7][8] A 2015 study that estimated the mass of volumetric models of Dreadnoughtus, Apatosaurus, and Giraffatitan estimates CM3018 at 21.8–38.2 t (21.5–37.6 long tons; 24.0–42.1 short tons), similar in mass to Dreadnoughtus.[9] Some specimens of A.ajax (such as OMNH1670) represent individuals 11–30% longer, suggesting masses twice that of CM3018 or 32.7–72.6 t (32.2–71.5 long tons; 36.0–80.0 short tons), potentially rivaling the largest titanosaurs.[10] However, the upper size estimate of OMNH1670 is likely an exaggeration, with the size estimates revised in 2020 at 30 m (98 ft) in length and 33 t (36 short tons) in body mass based on volumetric analysis.[11]
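The jump from "11-30% longer" to "masses twice that of CM3018" follows from isometric scaling: if proportions stay constant, mass grows with the cube of linear size, so a 30% increase in length implies about 1.3^3 ≈ 2.2 times the mass. A quick sketch of that arithmetic (the ~20 t baseline is simply a mid-range value from the estimates quoted above):

```python
# Isometric scaling: at constant proportions, mass scales with the cube
# of linear size. The ~20 t baseline sits inside the 16.4-22.4 t range
# quoted for CM 3018 above.
cm3018_mass_t = 20.0

for pct_longer in (11, 30):
    length_ratio = 1 + pct_longer / 100
    mass_factor = length_ratio ** 3  # cube of the linear scale factor
    print(f"{pct_longer}% longer -> mass x{mass_factor:.2f}, "
          f"~{cm3018_mass_t * mass_factor:.0f} t")
```

An individual 30% longer works out to roughly double the mass, matching the doubled-mass suggestion for the large A. ajax specimens.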
A. ajax skull, specimen CMC VP 7180
The skull is small in relation to the size of the animal. The jaws are lined with spatulate (chisel-like) teeth suited to an herbivorous diet.[12] The snout of Apatosaurus and similar diplodocoids is squared, with only Nigersaurus having a squarer skull.[13] The braincase of Apatosaurus is well preserved in specimen BYU17096, which also preserved much of the skeleton. A phylogenetic analysis found that the braincase had a morphology similar to those of other diplodocoids.[14] Some skulls of Apatosaurus have been found still in articulation with their teeth. Those teeth that have the enamel surface exposed do not show any scratches on the surface; instead, they display a sugary texture and little wear.[13]
Cervical vertebra of A. ajax (holotype, YPM 1860) in side and anterior view
Like those of other sauropods, the neck vertebrae are deeply bifurcated; they carried neural spines with a large trough in the middle, resulting in a wide, deep neck.[12] The vertebral formula for the holotype of A.louisae is 15cervicals, 10dorsals, 5sacrals, and 82caudals. The caudal vertebra number may vary, even within species.[15] The cervical vertebrae of Apatosaurus and Brontosaurus are stouter and more robust than those of other diplodocids and were found to be most similar to Camarasaurus by Charles Whitney Gilmore.[15][16] In addition, they support cervical ribs that extend farther towards the ground than in diplodocines, and have vertebrae and ribs that are narrower towards the top of the neck, making the neck nearly triangular in cross-section.[16] In Apatosaurus louisae, the atlas-axis complex of the first cervicals is nearly fused. The dorsal ribs are not fused or tightly attached to their vertebrae and are instead loosely articulated.[15]Apatosaurus has ten dorsal ribs on either side of the body.[17] The large neck was filled with an extensive system of weight-saving air sacs. Apatosaurus, like its close relative Supersaurus, has tall neural spines, which make up more than half the height of the individual bones of its vertebrae. The shape of the tail is unusual for a diplodocid; it is comparatively slender because of the rapidly decreasing height of the vertebral spines with increasing distance from the hips. Apatosaurus also had very long ribs compared to most other diplodocids, giving it an unusually deep chest.[18] As in other diplodocids, the tail transformed into a whip-like structure towards the end.[15]
The limb bones are also very robust.[18] Within Apatosaurinae, the scapula of Apatosaurus louisae is intermediate in morphology between those of A.ajax and Brontosaurus excelsus. The arm bones are stout, so the humerus of Apatosaurus resembles that of Camarasaurus, as well as Brontosaurus. However, the humeri of Brontosaurus and A.ajax are more similar to each other than they are to A.louisae. In 1936, Charles Gilmore noted that previous reconstructions of Apatosaurus forelimbs erroneously proposed that the radius and ulna could cross; in life they would have remained parallel.[15]Apatosaurus had a single large claw on each forelimb, a feature shared by all sauropods more derived than Shunosaurus.[15][19] The first three toes had claws on each hindlimb. The phalangeal formula is 2-1-1-1-1, meaning the innermost finger (phalanx) on the forelimb has two bones and the next has one.[20] The single manual claw bone (ungual) is slightly curved and squarely truncated on the anterior end. The pelvic girdle includes the robust ilia, and the fused (co-ossified) pubes and ischia. The femora of Apatosaurus are very stout and represent some of the most robust femora of any member of Sauropoda. The tibia and fibula bones are different from the slender bones of Diplodocus but are nearly indistinguishable from those of Camarasaurus. The fibula is longer and slenderer than the tibia. The foot of Apatosaurus has three claws on the innermost digits; the digit formula is 3-4-5-3-2. The first metatarsal is the stoutest, a feature shared among diplodocids.[15][21]
The first Apatosaurus fossils were discovered by Arthur Lakes, a local miner, and his friend Henry C. Beckwith in the spring of 1877 in Morrison, a town in the eastern foothills of the Rocky Mountains in Jefferson County, Colorado. Lakes wrote to Othniel Charles Marsh, Professor of Paleontology at Yale University, and Edward Drinker Cope, a paleontologist based in Philadelphia, about the discovery, eventually collecting several fossils and sending them to both paleontologists. Marsh named Atlantosaurus montanus based on some of the fossils sent and hired Lakes to collect the rest of the material at Morrison and send it to Yale, while Cope attempted to hire Lakes as well but was rejected.[22] One of the best specimens collected by Lakes in 1877 was a well-preserved partial postcranial skeleton, including many vertebrae, and a partial braincase (YPM VP 1860), which was sent to Marsh and named Apatosaurus ajax in November 1877.[23][22] The composite term Apatosaurus comes from the Greek words apatē (ἀπάτη)/apatēlos (ἀπατηλός) meaning "deception"/"deceptive", and sauros (σαῦρος) meaning "lizard";[24] thus, "deceptive lizard". Marsh gave it this name based on the chevron bones, which are dissimilar to those of other dinosaurs; instead, the chevron bones of Apatosaurus showed similarities with those of mosasaurs,[25][26] most likely that of the representative species Mosasaurus. By the end of excavations at Lakes' quarry in Morrison, several partial specimens of Apatosaurus had been collected, but only the type specimen of A. ajax can be confidently referred to the species.[27][23]
During excavation and transportation, the bones of the holotype skeleton were mixed with those of another apatosaurine individual originally described as Atlantosaurus immanis; as a consequence, some elements cannot be ascribed to either specimen with confidence.[28] Marsh distinguished the new genus Apatosaurus from Atlantosaurus on the basis of the number of sacral vertebrae, with Apatosaurus possessing three and Atlantosaurus four. Recent research shows that traits usually used to distinguish taxa at this time were actually widespread across several taxa, rendering many of the named taxa, like Atlantosaurus, invalid.[23] Two years later, Marsh announced the discovery of a larger and more complete specimen (YPM VP 1980) from Como Bluff, Wyoming; he gave this specimen the name Brontosaurus excelsus.[29] Also at Como Bluff, the Hubbell brothers, working for Edward Drinker Cope, collected a tibia, fibula, scapula, and several caudal vertebrae, along with other fragments belonging to Apatosaurus, in 1877–78 at Cope's Quarry 5 at the site.[30] Later, in 1884, Othniel Marsh named Diplodocus lacustris based on a chimeric partial dentary, snout, and several teeth collected by Lakes in 1877 at Morrison.[23][31] In 2013, it was suggested that the dentary of D. lacustris and its teeth were actually from Apatosaurus ajax, based on its proximity to the type braincase of A. ajax.[31] All specimens currently considered Apatosaurus were from the Morrison Formation, the location of the excavations of Marsh and Cope.[32]
After the end of the Bone Wars, many major institutions in the eastern United States were inspired by the depictions and finds of Marsh and Cope to assemble their own dinosaur fossil collections.[33] The competition to mount the first sauropod skeleton was especially intense, with the American Museum of Natural History, the Carnegie Museum of Natural History, and the Field Museum of Natural History all sending expeditions to the west to find the most complete sauropod specimen, bring it back to the home institution, and mount it in their fossil halls.[33] The American Museum of Natural History was the first to launch an expedition,[33] finding a well-preserved skeleton (AMNH 460) that is occasionally assigned to Apatosaurus and is considered nearly complete; only the head, feet, and sections of the tail are missing. It was the first sauropod skeleton mounted.[34] The specimen was found north of Medicine Bow, Wyoming, in 1898 by Walter Granger, and took the entire summer to extract.[35] To complete the mount, sauropod feet that were discovered at the same quarry and a tail fashioned to appear as Marsh believed it should – but which had too few vertebrae – were added. In addition, a sculpted model of what the museum thought the skull of this massive creature might look like was made. This was not a delicate skull like that of Diplodocus – which was later found to be more accurate – but was based on "the biggest, thickest, strongest skull bones, lower jaws and tooth crowns from three different quarries".[15][17][34][36] These skulls were likely those of Camarasaurus, the only other sauropod for which good skull material was known at the time. The mount construction was overseen by Adam Hermann, who failed to find Apatosaurus skulls. Hermann was forced to sculpt a stand-in skull by hand. Osborn said in a publication that the skull was "largely conjectural and based on that of Morosaurus" (now Camarasaurus).[37]
In 1903, Elmer Riggs published a study that described a well-preserved skeleton of a diplodocid from the Grand River Valley near Fruita, Colorado, Field Museum of Natural History specimen P25112. Riggs thought that the deposits were similar in age to those of Como Bluff in Wyoming, from which Marsh had described Brontosaurus. Most of the skeleton was found, and after comparison with both Brontosaurus and Apatosaurus ajax, Riggs realized that the holotype of A. ajax was immature, and thus the features distinguishing the genera were not valid. Since Apatosaurus was the earlier name, Brontosaurus should be considered a junior synonym of Apatosaurus. Because of this, Riggs recombined Brontosaurus excelsus as Apatosaurus excelsus. Based on comparisons with other species proposed to belong to Apatosaurus, Riggs also determined that the Field Columbian Museum specimen was likely most similar to A. excelsus.[17]
Despite Riggs' publication, Henry Fairfield Osborn, who was a strong opponent of Marsh and his taxa, labeled the American Museum of Natural History's Apatosaurus mount as Brontosaurus.[37][38] Because of this decision, the name Brontosaurus was commonly used outside of scientific literature for what Riggs considered Apatosaurus, and the museum's popularity meant that Brontosaurus became one of the best-known dinosaurs, even though the name was considered invalid throughout nearly all of the 20th and early 21st centuries.[39]
It was not until 1909 that an Apatosaurus skull was found, during the first expedition, led by Earl Douglass, to what would become known as the Carnegie Quarry at Dinosaur National Monument. The skull was found a short distance from a skeleton (specimen CM3018) identified as the new species Apatosaurus louisae, named after Louise Carnegie, wife of Andrew Carnegie, who funded field research to find complete dinosaur skeletons in the American West. The skull was designated CM11162; it was very similar to the skull of Diplodocus.[38] Another, smaller skeleton of A. louisae was found near CM11162 and CM3018.[40] The skull was accepted as belonging to the Apatosaurus specimen by Douglass and Carnegie Museum director William H. Holland, although other scientists – most notably Osborn – rejected this identification. Holland defended his view in 1914 in an address to the Paleontological Society of America, yet he left the Carnegie Museum mount headless. While some thought Holland was attempting to avoid conflict with Osborn, others suspected Holland was waiting until an articulated skull and neck were found to confirm the association of the skull and skeleton.[37] After Holland's death in 1934, museum staff placed a cast of a Camarasaurus skull on the mount.[38]
While most other museums were using cast or sculpted Camarasaurus skulls on Apatosaurus mounts, the Yale Peabody Museum decided to sculpt a skull based on the lower jaw of a Camarasaurus, with the cranium based on Marsh's 1891 illustration of the skull. The skull also included forward-pointing nasals – something unusual for any dinosaur – and fenestrae differing from both the drawing and other skulls.[37]
Side view of A. louisae CM3018 mounted with a cast of skull CM11162
No Apatosaurus skull was mentioned in the literature until the 1970s, when John Stanton McIntosh and David Berman redescribed the skulls of Diplodocus and Apatosaurus. They found that, though he never published his opinion, Holland was almost certainly correct: Apatosaurus had a Diplodocus-like skull. According to them, many skulls long thought to pertain to Diplodocus might instead be those of Apatosaurus. They reassigned multiple skulls to Apatosaurus based on associated and closely associated vertebrae. Even though they supported Holland, it was noted that Apatosaurus might have possessed a Camarasaurus-like skull, based on a disarticulated Camarasaurus-like tooth found at the precise site where an Apatosaurus specimen was found years before.[36] On October 20, 1979, after the publications by McIntosh and Berman, the first true skull of Apatosaurus was mounted on a skeleton in a museum, that of the Carnegie.[38] In 1998, it was suggested that the Felch Quarry skull that Marsh had included in his 1896 skeletal restoration instead belonged to Brachiosaurus.[41]
In 2011, the first specimen of Apatosaurus in which a skull was found articulated with its cervical vertebrae was described. This specimen, CMC VP 7180, was found to differ in both skull and neck features from A. louisae, but shared many features of the cervical vertebrae with A. ajax.[42] Another well-preserved specimen is Brigham Young University 17096, a skull and skeleton with a preserved braincase. The specimen was found in Cactus Park Quarry in western Colorado.[14] In 2013, Matthew Mossbrucker and several other authors published an abstract that described a premaxilla and maxilla from Lakes' original quarry in Morrison and referred the material to Apatosaurus ajax.[31]
Infographic explaining the history of Brontosaurus and Apatosaurus according to Tschopp et al. 2015
Almost all modern paleontologists agreed with Riggs that the two dinosaurs should be classified together in a single genus. According to the rules of the ICZN (which governs the scientific names of animals), the name Apatosaurus, having been published first, has priority as the official name; Brontosaurus was considered a junior synonym and was therefore long discarded from formal use.[43][44][45][46] Despite this, at least one paleontologist – Robert T. Bakker – argued in the 1990s that A. ajax and A. excelsus were in fact sufficiently distinct for the latter to merit a separate genus.[47]
In 2015, Emanuel Tschopp, Octávio Mateus, and Roger Benson released a paper on diplodocoid systematics, and proposed that genera could be diagnosed by thirteen differing characters and species separated based on six. The minimum number for generic separation was chosen based on the fact that A. ajax and A. louisae differ in twelve characters, and Diplodocus carnegiei and D. hallorum differ in eleven characters. Thus, thirteen characters were chosen to validate the separation of genera. The six differing features for specific separation were chosen by counting the number of differing features in separate specimens generally agreed to represent one species, with only one differing character in D. carnegiei and A. louisae, but five differing features in B. excelsus. Therefore, Tschopp et al. argued that Apatosaurus excelsus, originally classified as Brontosaurus excelsus, had enough morphological differences from other species of Apatosaurus that it warranted being reclassified as a separate genus again. The conclusion was based on a comparison of 477 morphological characteristics across 81 different dinosaur individuals. Among the many notable differences is the wider – and presumably stronger – neck of Apatosaurus species compared to B. excelsus. Other species previously assigned to Apatosaurus, such as Elosaurus parvus and Eobrontosaurus yahnahpin, were also reclassified as Brontosaurus.
Some features proposed to separate Brontosaurus from Apatosaurus include: posterior dorsal vertebrae with the centrum longer than wide; the scapula rear to the acromial edge and the distal blade being excavated; the acromial edge of the distal scapular blade bearing a rounded expansion; and the ratio of the proximodistal length to transverse breadth of the astragalus being 0.55 or greater.[28] Sauropod expert Michael D'Emic pointed out that the criteria chosen were to an extent arbitrary and that they would require abandoning the name Brontosaurus again if newer analyses obtained different results.[48] Mammal paleontologist Donald Prothero criticized the mass-media reaction to this study as superficial and premature, concluding that he would keep "Brontosaurus" in quotes and not treat the name as a valid genus.[49]
Apatosaurine specimen AMNH 460 at the AMNH as re-mounted in 1995
Apatosaurine mount (FMNH P25112) in the FMNH
Specimen NSMT-PV 20375, National Museum of Nature and Science, which may be A. ajax or a new species
Many species of Apatosaurus have been designated from scant material. Marsh named as many species as he could, which resulted in many being based upon fragmentary and indistinguishable remains. In 2005, Paul Upchurch and colleagues published a study that analyzed the species and specimen relationships of Apatosaurus. They found that A. louisae was the most basal species, followed by FMNH P25112, and then a polytomy of A. ajax, A. parvus, and A. excelsus.[21] Their analysis was revised and expanded with many additional diplodocid specimens in 2015, which resolved the relationships of Apatosaurus slightly differently, and also supported separating Brontosaurus from Apatosaurus.[28]
Apatosaurus ajax was named by Marsh in 1877 after Ajax, a hero from Greek mythology.[50] Marsh designated the incomplete, juvenile skeleton YPM1860 as its holotype. The species is less studied than Brontosaurus and A. louisae, especially because of the incomplete nature of the holotype. In 2005, many specimens in addition to the holotype were found assignable to A. ajax: YPM1840, NSMT-PV 20375, YPM1861, and AMNH460. The specimens date from the late Kimmeridgian to the early Tithonian ages.[21] In 2015, only the holotype YPM1860 was assigned to the species, with AMNH460 found either to be within Brontosaurus or potentially its own taxon. However, YPM1861 and NSMT-PV 20375 differed in only a few characteristics, and cannot be distinguished specifically or generically from A. ajax. YPM1861 is the holotype of "Atlantosaurus" immanis, which means the latter might be a junior synonym of A. ajax.[28]
Apatosaurus louisae was named by Holland in 1916, first known from a partial skeleton found in Utah.[51] The holotype is CM3018, with referred specimens including CM3378, CM11162, and LACM52844. The first consists of a vertebral column; the latter two consist of a skull and a nearly complete skeleton, respectively. Apatosaurus louisae specimens all come from the late Kimmeridgian of Dinosaur National Monument.[21] In 2015, Tschopp et al. found the type specimen of Apatosaurus laticollis to nest closely with CM3018, meaning the former is likely a junior synonym of A. louisae.[28]
The cladogram below is the result of an analysis by Tschopp, Mateus, and Benson (2015). The authors analyzed most diplodocid type specimens separately to deduce which specimen belonged to which species and genus.[28]
The most complete specimen known to date, A. sp. BYU 17096 nicknamed "Einstein"
Apatosaurus grandis was named in 1877 by Marsh in the article that described A. ajax. It was briefly described, figured, and diagnosed.[15] Marsh later mentioned that it had only provisionally been assigned to Apatosaurus when he reassigned it to his new genus Morosaurus in 1878.[52] Since Morosaurus has been considered a synonym of Camarasaurus, C. grandis is the oldest-named species of the latter genus.[53]
Apatosaurus excelsus was the original type species of Brontosaurus, first named by Marsh in 1879. Elmer Riggs reclassified Brontosaurus as a synonym of Apatosaurus in 1903, transferring the species B. excelsus to A. excelsus. In 2015, Tschopp, Mateus, and Benson argued that the species was distinct enough to be placed in its own genus, so they reclassified it back into Brontosaurus.[28]
Apatosaurus parvus, first described from a juvenile specimen as Elosaurus in 1902 by Peterson and Gilmore, was reassigned to Apatosaurus in 1994, and then to Brontosaurus in 2015. Many other, more mature specimens were assigned to it following the 2015 study.[28]
Apatosaurus minimus was originally described as a specimen of Brontosaurus sp. in 1904 by Osborn. In 1917, Charles Mook named it as its own species, A. minimus, based on a pair of ilia and their sacrum.[15][54][55] In 2012, Mike P. Taylor and Matt J. Wedel published a short abstract describing the material of A. minimus, finding it hard to place among either Diplodocoidea or Macronaria. While it was placed with Saltasaurus in a phylogenetic analysis, it was thought to instead represent some form with convergent features from many groups.[55] The study of Tschopp et al. did find that a camarasaurid position for the taxon was supported, but noted that the position of the taxon was highly variable and there was no clearly more likely position.[28]
Apatosaurus alenquerensis was named in 1957 by Albert-Félix de Lapparent and Georges Zbyszewski. It was based on postcranial material from Portugal. In 1990, this material was reassigned to Camarasaurus, but in 1998 it was given its own genus, Lourinhasaurus.[21] This was further supported by the findings of Tschopp et al. in 2015, where Lourinhasaurus was found to be sister to Camarasaurus and other camarasaurids.[28]
Apatosaurus is a member of the family Diplodocidae, a clade of gigantic sauropod dinosaurs. The family includes some of the longest creatures ever to walk the earth, including Diplodocus, Supersaurus, and Barosaurus. Apatosaurus is sometimes classified in the subfamily Apatosaurinae, which may also include Suuwassea, Supersaurus, and Brontosaurus.[18][56][57] Othniel Charles Marsh described Apatosaurus as allied to Atlantosaurus within the now-defunct group Atlantosauridae.[17][25] In 1878, Marsh raised his family to the rank of suborder, including Apatosaurus, Atlantosaurus, Morosaurus (=Camarasaurus), and Diplodocus. He classified this group within Sauropoda, a group he erected in the same study. In 1903, Elmer S. Riggs said the name Sauropoda would be a junior synonym of earlier names; he grouped Apatosaurus within Opisthocoelia.[17] Sauropoda is still used as the group name.[21] In 2011, John Whitlock published a study that placed Apatosaurus as a more basal diplodocid, sometimes less basal than Supersaurus.[58][59]
Cladogram of the Diplodocidae after Tschopp, Mateus, and Benson (2015).[28]
It was believed throughout the 19th and early 20th centuries that sauropods like Apatosaurus were too massive to support their own weight on dry land. It was theorized that they lived partly submerged in water, perhaps in swamps. More recent findings do not support this; sauropods are now thought to have been fully terrestrial animals.[60] A study of diplodocid snouts showed that the square snout, large proportion of pits, and fine, subparallel scratches on the teeth of Apatosaurus suggest it was a ground-height, nonselective browser.[13] It may have eaten ferns, cycadeoids, seed ferns, horsetails, and algae.[61] Stevens and Parrish (2005) speculate that these sauropods fed from riverbanks on submerged water plants.[62]
A 2015 study of the necks of Apatosaurus and Brontosaurus found many differences between them and other diplodocids, and these variations may indicate that the necks of Apatosaurus and Brontosaurus were used for intraspecific combat.[16] Various uses for the single claw on the forelimb of sauropods have been proposed. One suggestion is that it was used for defense, but its shape and size make this unlikely. It may also have been used for feeding, but the most probable use for the claw was grasping objects such as tree trunks when rearing.[19]
Trackways of sauropods like Apatosaurus show that they may have ranged around 25–40 km (16–25 mi) per day, and that they could potentially have reached a top speed of 20–30 km/h (12–19 mph).[12] The slow locomotion of sauropods may be due to their minimal muscling, or to recoil after strides.[63] A trackway of a juvenile has led some to believe that they were capable of bipedalism, though this is disputed.[64][65]
Artistic interpretation of an individual of A. louisae arching its neck down to drink
Diplodocids like Apatosaurus are often portrayed with their necks held high up in the air, allowing them to browse on tall trees. Some studies state diplodocid necks were less flexible than previously believed, because the structure of the neck vertebrae would not have allowed the neck to bend far upward, and that sauropods like Apatosaurus were adapted to low browsing or ground feeding.[61][62][66]
Other studies by Taylor find that all tetrapods appear to hold their necks at the maximum possible vertical extension when in a normal, alert posture; they argue the same would hold true for sauropods barring any unknown, unique characteristics that set the soft tissue anatomy of their necks apart from that of other animals. Apatosaurus, like Diplodocus, would have held its neck angled upward with the head pointing downward in a resting posture.[67][68] Kent Stevens and Michael Parrish (1999 and 2005) state Apatosaurus had a great feeding range; its neck could bend into a U-shape laterally.[61] The neck's range of movement would have also allowed the head to feed at the level of the feet.[62]
Matthew Cobley et al. (2013) dispute this, finding that large muscles and cartilage would have limited movement of the neck. They state the feeding ranges for sauropods like Diplodocus were smaller than previously believed, and the animals may have had to move their whole bodies around to better access areas where they could browse vegetation. As such, they might have spent more time foraging to meet their minimum energy needs.[69][70] The conclusions of Cobley et al. are disputed by Taylor, who analyzed the amount and positioning of intervertebral cartilage to determine the flexibility of the neck of Apatosaurus and Diplodocus. He found that the neck of Apatosaurus was very flexible.[67]
Given the large body mass and long neck of sauropods like Apatosaurus, physiologists have encountered problems determining how these animals breathed. Beginning with the assumption that, like crocodilians, Apatosaurus did not have a diaphragm, the dead-space volume (the amount of unused air remaining in the mouth, trachea, and air tubes after each breath) has been estimated at about 0.184 m3 (184 L) for a 30 t (30 long tons; 33 short tons) specimen. Paladino calculates its tidal volume (the amount of air moved in or out during a single breath) at 0.904 m3 (904 L) with an avian respiratory system, 0.225 m3 (225 L) if mammalian, and 0.019 m3 (19 L) if reptilian.[71]
On this basis, its respiratory system would likely have consisted of parabronchi, with multiple pulmonary air sacs as in avian lungs, and a flow-through lung. An avian respiratory system would need a lung volume of about 0.60 m3 (600 L), compared with a mammalian requirement of 2.95 m3 (2,950 L), which would exceed the space available. The overall thoracic volume of Apatosaurus has been estimated at 1.7 m3 (1,700 L), allowing for a 0.50 m3 (500 L), four-chambered heart and a 0.90 m3 (900 L) lung capacity. That would allow about 0.30 m3 (300 L) for the necessary tissue.[71] Evidence for the avian system in Apatosaurus and other sauropods is also present in the pneumaticity of the vertebrae. Though this plays a role in reducing the weight of the animal, Wedel (2003) states the air spaces are also likely connected to air sacs, as in birds.[72]
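The volume estimates above can be balanced with simple arithmetic. The sketch below is a back-of-the-envelope illustration using only the figures quoted in this section (no new data), showing that an avian-style lung fits within the estimated thoracic cavity while a mammalian-style lung would not.

```python
# Back-of-the-envelope check of the thoracic volume budget quoted above.
# All volumes in cubic metres, taken from the estimates in the text.

thoracic = 1.7       # estimated total thoracic volume
heart = 0.50         # four-chambered heart
avian_lung = 0.90    # lung capacity allowed under an avian system
mammal_lung = 2.95   # lung volume a mammalian system would require

# An avian lung leaves about 0.30 m^3 for the remaining tissue...
tissue = thoracic - heart - avian_lung
print(f"tissue space with avian lungs: {tissue:.2f} m^3")  # 0.30 m^3

# ...whereas a mammalian lung alone would exceed the whole cavity.
print(mammal_lung > thoracic)  # True
```

The budget closes exactly as stated in the text: 1.7 minus 0.50 for the heart and 0.90 for the lungs leaves the 0.30 m3 allowed for other tissue.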
James Spotila et al. (1991) conclude that the large body size of sauropods would have made them unable to maintain high metabolic rates because they would not have been able to release enough heat.[73] They assumed sauropods had a reptilian respiratory system. Wedel says that an avian system would have allowed them to dump more heat.[72] Some scientists state that the heart would have had trouble sustaining sufficient blood pressure to oxygenate the brain.[60] Others suggest that the near-horizontal posture of the head and neck would have eliminated the problem of supplying blood to the brain because it would not have been elevated.[61]
James Farlow (1987) calculates that an Apatosaurus-sized dinosaur of about 35 t (34 long tons; 39 short tons) would have possessed 5.7 t (5.6 long tons; 6.3 short tons) of fermentation contents.[74] Assuming Apatosaurus had an avian respiratory system and a reptilian resting metabolism, Frank Paladino et al. (1997) estimate the animal would have needed to consume only about 262 liters (58 imp gal; 69 U.S. gal) of water per day.[71]
A 1999 microscopic study of Apatosaurus and Brontosaurus bones concluded the animals grew rapidly when young and reached near-adult sizes in about 10 years.[75] In 2008, a study on the growth rates of sauropods was published by Thomas Lehman and Holly Woodward. They said that, by using growth lines and length-to-mass ratios, Apatosaurus would have grown to 25 t (25 long tons; 28 short tons) in 15 years, with growth peaking at 5,000 kg (11,000 lb) in a single year. An alternative method, using limb length and body mass, found Apatosaurus grew 520 kg (1,150 lb) per year, and reached its full mass before it was about 70 years old.[76] These estimates have been called unreliable because the calculation methods are not sound; old growth lines would have been obliterated by bone remodelling.[77] One of the first identified growth factors of Apatosaurus was the number of sacral vertebrae, which increased to five by the time of the creature's maturity. This was first noted in 1903 and again in 1936.[15]
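The contrast between the two growth estimates above can be made explicit with simple arithmetic. The sketch below is illustrative only, assuming constant average rates and using just the figures quoted in this paragraph.

```python
# Compare the two published growth estimates for Apatosaurus.

adult_mass_kg = 25_000          # ~25 t adult mass (Lehman & Woodward)

# Estimate 1: 25 t reached in about 15 years.
avg_rate = adult_mass_kg / 15   # implied average growth rate
print(round(avg_rate))          # ~1667 kg per year on average

# Estimate 2: 520 kg per year from the limb-based method.
years_to_25t = adult_mass_kg / 520
print(round(years_to_25t, 1))   # ~48.1 years just to reach 25 t
```

At 520 kg per year, reaching 25 t would take roughly three times as long as under the first estimate, which is consistent with the limb-based method's conclusion that full mass arrived only before about 70 years of age.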
Compared with most sauropods, a relatively large amount of juvenile material is known from Apatosaurus. Multiple specimens in the OMNH are from juveniles of an undetermined species of Apatosaurus; this material includes partial shoulder and pelvic girdles, some vertebrae, and limb bones. OMNH juvenile material is from at least two different age groups and based on overlapping bones likely comes from more than three individuals. The specimens exhibit features that distinguish Apatosaurus from its relatives, and thus likely belong to the genus.[21][78] Juvenile sauropods tend to have proportionally shorter necks and tails, and a more pronounced forelimb-hindlimb disparity than found in adult sauropods.[79]
An article published in 1997 reported research of the mechanics of Apatosaurus tails by Nathan Myhrvold and paleontologist Philip J. Currie. Myhrvold carried out a computer simulation of the tail, which in diplodocids like Apatosaurus was a very long, tapering structure resembling a bullwhip. This computer modeling suggested diplodocids were capable of producing a whiplike cracking sound of over 200 decibels, comparable to the volume of a cannon being fired.[80]
A pathology has been identified on the tail of Apatosaurus, caused by a growth defect. Two caudal vertebrae are seamlessly fused along the entire articulating surface of the bone, including the arches of the neural spines. This defect might have been caused by the lack or inhibition of the substance that forms intervertebral disks or joints.[81] It has been proposed that the whips could have been used in combat and defense, but the tails of diplodocids were quite light and narrow compared to Shunosaurus and mamenchisaurids, and thus to injure another animal with the tail would severely injure the tail itself.[80] More recently, Baron (2020) considers the use of the tail as a bullwhip unlikely because of the potentially catastrophic muscle and skeletal damage such speeds could cause on the large and heavy tail. Instead, he proposes that the tails might have been used as a tactile organ to keep in touch with the individuals behind and on the sides in a group while migrating, which could have augmented cohesion and allowed communication among individuals while limiting more energetically demanding activities like stopping to search for dispersed individuals, turning to visually check on individuals behind, or communicating vocally.[82]
The Morrison Formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, dates from between 156.3 mya at its base[83] and 146.8 mya at the top,[84] placing it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. This formation is interpreted as originating in a locally semiarid environment with distinct wet and dry seasons. The Morrison Basin, where dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan; it was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains.[85] This formation is similar in age to the Lourinhã Formation in Portugal and the Tendaguru Formation in Tanzania.[32]
Ischium of an Apatosaurus showing bite marks from a large theropod
Apatosaurus was the second most common sauropod in the Morrison Formation ecosystem, after Camarasaurus.[53][86] Apatosaurus may have been more solitary than other Morrison Formation dinosaurs.[87] Fossils of the genus have only been found in the upper levels of the formation. Those of Apatosaurus ajax are known exclusively from the upper Brushy Basin Member, about 152–151 mya. A. louisae fossils are rare, known only from one site in the upper Brushy Basin Member; they date to the late Kimmeridgian stage, about 151 mya. Additional Apatosaurus remains are known from similarly aged or slightly younger rocks, but they have not been identified as any particular species,[88] and thus may instead belong to Brontosaurus.[28]
McIntosh, J.S.; Berman, D.S. (1975). "Description of the Palate and Lower Jaw of the Sauropod Dinosaur Diplodocus (Reptilia: Saurischia) with Remarks on the Nature of the Skull of Apatosaurus". Journal of Paleontology. 49 (1): 187–199. JSTOR 1303324.

A 2015 study concluded that Brontosaurus is a valid genus of sauropod distinct from Apatosaurus, but not all paleontologists agree with this division. As it existed in North America during the late Jurassic, Apatosaurus would have lived alongside dinosaurs such as Allosaurus, Camarasaurus, Diplodocus, and Stegosaurus.
Comparison of A. ajax (orange) and A. louisae (red) with a human (blue) and Brontosaurus parvus (green)
Apatosaurus was a large, long-necked, quadrupedal animal with a long, whip-like tail. Its forelimbs were slightly shorter than its hindlimbs. Most size estimates are based on specimen CM3018, the type specimen of A. louisae, reaching 21–23 m (69–75 ft) in length and 16.4–22.4 t (16.1–22.0 long tons; 18.1–24.7 short tons) in body mass.[5][6][7][8] A 2015 study that estimated the mass of volumetric models of Dreadnoughtus, Apatosaurus, and Giraffatitan estimated CM3018 at 21.8–38.2 t (21.5–37.6 long tons; 24.0–42.1 short tons), similar in mass to Dreadnoughtus.[9] Some specimens of A. ajax (such as OMNH1670) represent individuals 11–30% longer, suggesting masses twice that of CM3018, or 32.7–72.6 t (32.2–71.5 long tons; 36.0–80.0 short tons), potentially rivaling the largest titanosaurs.[10] However, the upper size estimate for OMNH1670 is likely an exaggeration, with the size estimates revised in 2020 to 30 m (98 ft) in length and 33 t (32 long tons; 36 short tons) in body mass.
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | no_statement | the "brontosaurus" and the apatosaurus were different "dinosaurs".. the "brontosaurus" and the apatosaurus are not the same species of "dinosaur". | https://abcnews.go.com/Technology/brontosaurus-finally-validated-distinct-dinosaur-species/story?id=30134546 | Brontosaurus Finally Validated as a Distinct Dinosaur - ABC News | Brontosaurus Finally Validated as a Distinct Dinosaur
On the edge of the solar system, the dwarf planet Pluto, which knows what it feels like to be banished from an exclusive club, may be cheering for the brontosaurus.
While the long-necked dinosaur's name may be known by legions of fans and even made it onto a postage stamp in the 1980s, most paleontologists would be quick to point out that the brontosaurus was not a valid dinosaur genus.
But the iconic dinosaur name may finally be reinstated more than a century after researchers concluded that the long-necked brontosaurus and apatosaurus belonged to the same genus, according to an analysis published today in the journal PeerJ.
The apatosaurus name, which was published first, took precedence while the brontosaurus was cast aside. Both dinosaurs lived approximately 150 million years ago.
"Until very recently, the claim that Brontosaurus was the same as Apatosaurus was completely reasonable, based on the knowledge we had," Emanuel Tschopp, one of the researchers from the New University of Lisbon in Portugal, said in a statement.
Setting out to analyze the differences between the large grouping of diplodocid dinosaurs, which include the apatosaurus and other dinosaurs characterized by their long necks and plant-based diets, Tschopp and his team said they did not expect to resurrect the brontosaurus.
Using more recently discovered fossil evidence of similar dinosaurs, researchers found enough distinctions that place the brontosaurus in its own genus, according to the study, which looked at 49 fossils.
They found the apatosaurus had a bulkier neck, while the slightly more slender brontosaurus stood out for a longer bone found in its ankles.
"The differences we found between Brontosaurus and Apatosaurus were at least as numerous as the ones between other closely related genera, and much more than what you normally find between species," Roger Benson, a professor at the University of Oxford and co-author of the study, said in a statement.
The brontosaurus' modern-day story began in the 1870s when rival paleontologists Edward Cope and Othniel Marsh raced to publish new dinosaur names.
Marsh first discovered the apatosaurus and then two years later found another dinosaur fossil at the same location and named it the brontosaurus.
In 1903, it was ruled that the two skeletons bore too many similarities and were deemed different species of the same genus, which was called the apatosaurus.
More than a century later, new discoveries have continued to prove the ever-changing nature of science, suggesting that perhaps there is hope for Pluto after all. | Brontosaurus Finally Validated as a Distinct Dinosaur
| no |
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | no_statement | the "brontosaurus" and the apatosaurus were different "dinosaurs".. the "brontosaurus" and the apatosaurus are not the same species of "dinosaur". | https://en.wikipedia.org/wiki/Brontosaurus | Brontosaurus - Wikipedia | The anatomy of Brontosaurus is well known, with fossils demonstrating that it was large, long-necked, and quadrupedal with a long tail terminating in a whip-like structure. The cervical vertebrae are notably extremely robust and heavily-built, in contrast to its lightly built relatives Diplodocus and Barosaurus. The forelimbs were short and stout whereas the hindlimbs were elongated and thick, supported respectively by a heavily built shoulder girdle and pelvis. Several size estimates have been made, with the largest species B. excelsus reaching up to 21–22 m (69–72 ft) from head to tail and weighing in at 15–17 t (17–19 short tons), whereas the smaller B. parvus only got up to 19 m (62 ft) long. Juvenile specimens of Brontosaurus are known, with younger individuals growing rapidly to adult size in as little as 15 years.
Brontosaurus has been classified within the family Diplodocidae, a group of sauropods that had shorter necks and longer tails compared to other families like brachiosaurs and mamenchisaurs. Diplodocids first evolved in the Middle Jurassic but peaked in diversity during the Late Jurassic with forms like Brontosaurus before becoming extinct in the Early Cretaceous. Brontosaurus is a genus in the subfamily Apatosaurinae, which includes only Brontosaurus and Apatosaurus; the two are distinguished by their robust builds and thick necks. Although Apatosaurinae was named in 1929, the group was not used validly until an extensive 2015 paper, which found Brontosaurus to be valid. However, the status of Brontosaurus is still uncertain, with some paleontologists continuing to consider it a synonym of Apatosaurus.
An 1896 diagram of the B. excelsus holotype skeleton by O.C. Marsh. The head is based on material now assigned to Brachiosaurus sp.
The discovery of a large and fairly complete sauropod skeleton was announced in 1879 by Othniel Charles Marsh, a professor of paleontology at Yale University. The specimen was collected from Morrison Formation rocks at Como Bluff, Wyoming by William Harlow Reed. He identified it as belonging to an entirely new genus and species, which he named Brontosaurus excelsus,[3] meaning "thunder lizard", from the Greek brontē/βροντη meaning "thunder" and sauros/σαυρος meaning "lizard",[4] and from the Latin excelsus, "noble" or "high".[5] By this time, the Morrison Formation had become the center of the Bone Wars, a fossil-collecting rivalry between Marsh and another early paleontologist, Edward Drinker Cope. Because of this, the publications and descriptions of taxa by Marsh and Cope were rushed at the time.[6] Brontosaurus excelsus' type specimen (YPM 1980) was one of the most complete sauropod skeletons known at the time, preserving many of the characteristic but fragile cervical vertebrae.[7] Marsh believed that Brontosaurus was a member of the Atlantosauridae, a clade of sauropod dinosaurs he named in 1877 that also included Atlantosaurus and Apatosaurus.[7] A year later in 1880, another partial postcranial Brontosaurus skeleton was collected near Como Bluff by Reed,[8][9] including well-preserved limb elements.[10] Marsh named this second skeleton Brontosaurus amplus ("large thunder lizard") in 1881,[9] but it was considered a synonym of B. excelsus in 2015.[10]
In August 1883, Marshall P. Felch collected a disarticulated partial skull (USNM V 5730) of a sauropod further south in the Felch Quarry at Garden Park, Colorado and sent the specimen to Yale.[11][12] Marsh referred the skull to B. excelsus,[11][13] later featuring it in a skeletal reconstruction of the B. excelsus type specimen in 1891,[13] and the illustration was featured again in Marsh's landmark publication, The Dinosaurs of North America, in 1896.[7] At the Yale Peabody Museum, the skeleton of Brontosaurus excelsus was mounted in 1931 with a skull based on the Marsh reconstruction of the Felch Quarry skull.[14] While at the time most museums were using Camarasaurus casts for skulls, the Peabody Museum sculpted a completely different skull based on Marsh's reconstruction.[14][11] Marsh's skull was inaccurate for several other reasons: it included forward-pointing nasals, unlike any other known dinosaur, and fenestrae differing from both the drawing and other skulls, while the mandible was based on that of Camarasaurus.[14] In 1998, the Felch Quarry skull that Marsh included in his 1896 skeletal restoration was suggested to belong to Brachiosaurus instead,[11] and this was supported in 2020 with a redescription of the brachiosaurid material found at the Felch Quarry.[12]
During a Carnegie Museum expedition to Wyoming in 1901, William Harlow Reed collected another Brontosaurus skeleton, a partial postcranial skeleton of a young juvenile (CM 566), including partial limbs. However, this individual was found intermingled with a fairly complete skeleton of an adult (UW 15556).[15] The adult skeleton in particular was very well preserved, bearing many cervical (neck) and caudal (tail) vertebrae, and is the most complete definite specimen of the species.[10] The skeletons were granted a new genus and species name, Elosaurus parvus ("little field lizard"), by Olof A. Peterson and Charles Gilmore in 1902.[15] Both of the specimens came from the Brushy Basin Member of the Morrison Formation. The species was later transferred to Apatosaurus by several authors.[16][17] In 2008, a nearly complete postcranial skeleton of an apatosaurine was collected in Utah by crews working for Brigham Young University (BYU 1252-18531), where some of the remains are currently on display.[10] The skeleton is undescribed, but many of its features are shared with A. parvus.[10] The species was placed in Brontosaurus by Tschopp et al. in 2015 during their comprehensive study of Diplodocidae.[18][10]
Infographic explaining the history of Brontosaurus and Apatosaurus according to Tschopp et al. 2015
In the 1903 edition of Geological Series of the Field Columbian Museum, Elmer Riggs argued that Brontosaurus was not different enough from Apatosaurus to warrant a separate genus, so he created the new combination Apatosaurus excelsus for it. Riggs stated that "In view of these facts the two genera may be regarded as synonymous. As the term 'Apatosaurus' has priority, 'Brontosaurus' will be regarded as a synonym".[19] Nonetheless, before the mounting of the American Museum of Natural History specimen, Henry Fairfield Osborn chose to label the skeleton "Brontosaurus", though he was a strong opponent of Marsh and his taxa.[14][20]
In 1905, the American Museum of Natural History (AMNH) unveiled the first-ever mounted skeleton of a sauropod, a composite specimen (mainly made of bones from AMNH 460) that they referred to as Brontosaurus excelsus. The AMNH specimen was very complete, only missing the feet, which were added from specimen AMNH 592; lower leg and shoulder bones were added from AMNH 222, and tail bones from AMNH 339.[21] To finish the mount, the rest of the tail was fashioned to appear as Marsh believed it should, which meant it had too few vertebrae. In addition, a sculpted model of what the museum felt the skull of this massive creature might have looked like was placed on the skeleton. This was not a delicate skull like that of Diplodocus, which would later turn out to be more accurate, but was based on "the biggest, thickest, strongest skull bones, lower jaws, and tooth crowns from three different quarries".[22][19][23][24] These skulls were likely those of Camarasaurus, the only other sauropod of which good skull material was known at the time. The mount construction was overseen by Adam Hermann, who failed to find Brontosaurus skulls and was forced to sculpt a stand-in skull by hand. Henry Fairfield Osborn noted in a publication that the skull was "largely conjectural and based on that of Morosaurus" (now Camarasaurus).[14]
In 1909, an Apatosaurus skull was found during the first expedition, led by Earl Douglass, to what would become the Carnegie Quarry at Dinosaur National Monument. The skull was found a few meters away from a skeleton (specimen CM 3018) identified as the new species Apatosaurus louisae. The skull was designated CM 11162 and was very similar to the skull of Diplodocus. It was accepted as belonging to the Apatosaurus specimen by Douglass and Carnegie Museum director William J. Holland, although other scientists, most notably Osborn, rejected this identification. Holland defended his view in 1914 in an address to the Paleontological Society of America, yet he left the Carnegie Museum mount headless. While some thought Holland was attempting to avoid conflict with Osborn, others suspected that Holland was waiting until an articulated skull and neck were found to confirm the association of the skull and skeleton.[14] After Holland's death in 1934, a cast of a Camarasaurus skull was placed on the mount by museum staff.[20]
No apatosaurine skull was mentioned in the literature until the 1970s, when John Stanton McIntosh and David Berman redescribed the skulls of Diplodocus and Apatosaurus in 1975.[24] They found that, though he never published his opinion, Holland was almost certainly correct in that Apatosaurus and Brontosaurus had a Diplodocus-like skull.[24] According to them, many skulls long thought to belong to Diplodocus might instead be those of Apatosaurus.[24] They reassigned multiple skulls to Apatosaurus based on associated and closely associated vertebrae.[24] Even after they supported Holland, it was still falsely theorized that Apatosaurus may have possessed a Camarasaurus-like skull, based on a disarticulated Camarasaurus-like tooth found at the precise site where an Apatosaurus specimen was found years before.[24] However, this tooth does not come from Apatosaurus.[25] On October 20, 1979, after the publications by McIntosh and Berman, the first skull of an Apatosaurus was mounted on a museum skeleton, that of the Carnegie Museum.[20] In 1995, the American Museum of Natural History followed suit and unveiled their remounted skeleton (now labelled Apatosaurus excelsus) with a corrected tail and a new skull cast from A. louisae.[21] In 1998, Robert T. Bakker referred a skull and mandible of an apatosaurine from Como Bluff to Brontosaurus excelsus (TATE 099-01), though the skull is still undescribed.[26] In 2011, the first specimen of Apatosaurus in which a skull was found articulated with its cervical vertebrae was described. This specimen, CMC VP 7180, was found to differ in both skull and neck features from A. louisae, but to share a majority of features with A. ajax.[27]
Another specimen of an apatosaurine now referred to Brontosaurus was discovered in 1993 by the Tate Geological Museum, also from the Morrison Formation of central Wyoming. The specimen consisted of a partial postcranial skeleton, including a complete manus and multiple vertebrae, and was described by James Filla and Pat Redman a year later.[26] Filla and Redman named the specimen Apatosaurus yahnahpin ("yahnahpin-wearing deceptive lizard"), but Robert T. Bakker gave it the genus name Eobrontosaurus in 1998.[26] Bakker believed that Eobrontosaurus was the direct predecessor of Brontosaurus,[26] although Tschopp et al.'s phylogenetic analysis placed B. yahnahpin as the basalmost species of Brontosaurus.[10]
Almost all 20th-century paleontologists agreed with Riggs that all Apatosaurus and Brontosaurus species should be classified in a single genus. According to the rules of the ICZN, which governs the scientific names of animals, the name Apatosaurus, having been published first, had priority; Brontosaurus was considered a junior synonym and was therefore discarded from formal use.[28][29][30][31] Despite this, at least one paleontologist—Robert T. Bakker—argued in the 1990s that A. ajax and A. excelsus are sufficiently distinct that the latter continues to merit a separate genus.[26] In 2015, an extensive study of diplodocid relationships by Emanuel Tschopp, Octavio Mateus, and Roger Benson concluded that Brontosaurus was indeed a valid genus of sauropod distinct from Apatosaurus. The scientists developed a statistical method to more objectively assess differences between fossil genera and species and concluded that Brontosaurus could be "resurrected" as a valid name. They assigned two former Apatosaurus species, A. parvus and A. yahnahpin, to Brontosaurus, as well as the type species B. excelsus.[10] The publication was met with some criticism from other paleontologists, including Michael D'Emic[32] and Donald Prothero, who criticized the mass media reaction to this study as superficial and premature.[33] Some paleontologists, like John and Rebecca Foster, continue to consider Brontosaurus a synonym of Apatosaurus.[34][35]
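At its core, the 2015 approach scored specimens on discrete skeletal characters and counted pairwise differences, judging genus rank against a calibrated cutoff. The following is only a minimal sketch of that idea; the character strings, specimen labels' scores, and the cutoff value are invented placeholders, not data from Tschopp et al. (2015):

```python
# Sketch of pairwise character comparison between fossil specimens.
# '?' marks missing data; differences are counted only at positions
# where both specimens preserve the character.
def count_differences(a, b):
    return sum(1 for x, y in zip(a, b)
               if x != "?" and y != "?" and x != y)

# Placeholder character scores (not real data from the 2015 study).
specimens = {
    "YPM 1980 (B. excelsus type)": "0101?11",
    "CM 3018 (A. louisae type)":   "1100110",
}
(name_a, sa), (name_b, sb) = specimens.items()
diff = count_differences(sa, sb)

GENUS_CUTOFF = 3  # placeholder; the real study calibrated this empirically
verdict = "distinct genera" if diff >= GENUS_CUTOFF else "same genus"
print(f"{name_a} vs {name_b}: {diff} differing characters -> {verdict}")
```

The real analysis used hundreds of characters and calibrated its thresholds against differences observed between uncontroversial diplodocid genera, but the counting logic is the same in spirit.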
Brontosaurus was a large, long-necked, quadrupedal animal with a long, whip-like tail, and forelimbs that were slightly shorter than its hindlimbs. The largest species, B. excelsus, measured up to 21–22 m (69–72 ft) long from head to tail and weighed up to 15–17 t (17–19 short tons); other species were smaller, measuring 19 m (62 ft) long and weighing 14 t (15 short tons).[36][37] The skull of Brontosaurus has not been found but was probably similar to the skull of the closely related Apatosaurus. Several skulls of Apatosaurus have been found, all of which are very small in proportion to the body. Their snouts were squared off and low, in contrast to those of macronarians.[38] The jaws of Apatosaurus and other diplodocids were lined with spatulate (chisel-like) teeth adapted for herbivory.[25][27]
A cervical (top) and dorsal vertebra (bottom) of B. excelsus.
Comparison of three specimens and a human: Oklahoma specimen of Apatosaurus ajax (orange), A. louisae (red), and Brontosaurus parvus (green)
Like those of other diplodocids, the vertebrae of the neck were deeply bifurcated on the dorsal side; that is, they carried paired spines, resulting in a wide and deep neck.[39] The spine and tail consisted of 15 cervicals, ten dorsals, five sacrals, and about 82 caudals, based on Apatosaurus. The number of caudal vertebrae has been noted to vary, even within a species. Vertebrae in the neck, torso, and sacrum of sauropods bore large pneumatic foramina on their lateral sides,[9] which lightened the bones and helped keep the animal's overall weight down. Within the vertebrae, smooth bone walls and diverticula formed pockets of air that kept the bones light.[40] Similar structures are observable in birds and large mammals.[41] The cervical vertebrae were stouter than those of other diplodocids, as in Apatosaurus. On the lateral sides of the cervicals, apatosaurines had well-developed and thick parapophyses (extensions on the lateral sides of the vertebrae that attached to cervical ribs) which pointed ventrally under the centrum. These parapophyses, in conjunction with dense diapophyses and cervical ribs, were strong anchors for neck muscles, which could sustain extreme force.[42] The cervicals were also more boxy than in other sauropods due to their truncated zygapophyses and tall build.[43][10] These vertebrae are triangular in anterior view, whereas they most often are rounded or square in genera like Camarasaurus. Despite its pneumaticity, the neck of Brontosaurus is thought to have been double the mass of those of other diplodocids due to its sturdiness.[42] Brontosaurus differs from Apatosaurus in that the bases of the posterior dorsal vertebrae's neural spines are longer than they are wide. The cervicals of species within Brontosaurus also vary, such as the lack of tubercles on the neural spines of B. excelsus and the lateral expansion of unbifurcated neural spines in B. parvus.[10]
Its dorsal vertebrae had short centra with large fossae (shallow excavations) on their lateral sides, though not as extensive as those of the cervicals.[44] Neural canals, which contain the spinal cord of the vertebral column, are ovate and large in the dorsals. The diapophyses protrude outward and curve downward in a hook shape. Neural spines are thick in anterior-posterior view with a bifurcate top.[10] The neural spines of the dorsals increase in height further towards the tail, creating an arched back. Apatosaurine neural spines compose more than half the height of the vertebrae. Medial surfaces of neural spines are gently rounded in B. yahnahpin, whereas in other Brontosaurus species they are not.[10] The dorsal ribs are not fused or tightly attached to their vertebrae, instead being loosely articulated.[22] Ten dorsal ribs are on either side of the body.[19] Expanded excavations within the sacrum make it a hollow cylinder. Sacral neural spines are fused together into a thin plate. The posteriormost caudal vertebra was lightly fused to the sacral vertebrae, becoming part of the plate. Internally, the neural canal was enlarged.[45][46][19] The shape of the tail was typical of diplodocids, being comparatively slender, due to the vertebral spines rapidly decreasing in height the farther they are from the hips. As in other diplodocids, the last portion of the tail of Brontosaurus possessed a whip-like structure.[22] The tail also bears an extensive air-sac system to lighten its weight, as observed in specimens of B. parvus.[47][48]
Several scapulae are known from Brontosaurus, all of which are long and thin with relatively elongated shafts.[45] One of the traits that distinguishes Brontosaurus from Apatosaurus is the presence of a depression on the posterior face of the scapula, which the latter lacks. The scapula of Brontosaurus also has a rounded extension off of its edge, a characteristic unique to Brontosaurus among Apatosaurinae.[10] The coracoid anatomy is closely akin to that of Apatosaurus, with a quadratic outline in dorsal view. Sterna have been preserved in some specimens of Brontosaurus, which display an oval outline.[9] The hip bones include robust ilia and the fused pubes and ischia. The limb bones were also very robust,[49] with the humerus resembling that of Camarasaurus, and those of B. excelsus being nearly identical to those of Apatosaurus ajax. The humerus had a thin bone shaft and larger transverse ends, with a large deltopectoral crest on the extremities of its anterior end.[50] Charles Gilmore in 1936 noted that previous reconstructions erroneously proposed that the radius and ulna could cross, when in life they would have remained parallel.[22] Brontosaurus had a single large claw on each forelimb, which faced towards the body, whereas the rest of the phalanges lacked unguals.[51] Even by 1936, it was recognized that no sauropod had more than one hand claw preserved, and this one claw is now accepted as the maximum number throughout the entire group.[22][52] The metacarpals are elongated and thinner than the phalanges, bearing boxy articular ends on their proximal and distal faces.[7] The phalangeal formula is 2-1-1-1-1, meaning the innermost finger (phalanx) of the forelimb has two bones and each of the others has one. The single manual claw bone (ungual) is slightly curved and squarely truncated on the anterior end.
Proportions of the manus bones vary within Apatosaurinae as well, with B. yahnahpin's ratio of longest metacarpal to radius length around 0.40 or greater, compared to a lower value in Apatosaurus louisae.[10] The femora of Brontosaurus are very stout and represent some of the most robust femora of any member of Sauropoda. The tibia and fibula bones are different from the slender bones of Diplodocus but are nearly indistinguishable from those of Camarasaurus. The fibula is longer and slenderer than the tibia. The foot of Brontosaurus has three claws on the innermost digits; the digit formula is 3-4-5-3-2. The first metatarsal is the stoutest, a feature shared among diplodocids.[22] B. excelsus' astragalus differs from other species in that it lacks a laterally directed ventral shelf.[10]
Brontosaurus is a member of the family Diplodocidae, a clade of gigantic sauropod dinosaurs. The family includes some of the longest and largest creatures ever to walk the earth, including Diplodocus, Supersaurus, and Barosaurus. Diplodocids first evolved during the Middle Jurassic in what is now Georgia, spreading to North America during the Late Jurassic.[53]Brontosaurus is classified in the subfamily Apatosaurinae, which also includes Apatosaurus and possibly one or more unnamed genera.[10] Othniel Charles Marsh described Brontosaurus as being allied to Atlantosaurus, within the now defunct group Atlantosauridae.[19][54] In 1878, Marsh raised his family to the rank of suborder, including Apatosaurus, Brontosaurus, Atlantosaurus, Morosaurus (=Camarasaurus), and Diplodocus. He classified this group within Sauropoda. In 1903, Elmer S. Riggs mentioned that the name Sauropoda would be a junior synonym of earlier names, and grouped Apatosaurus within Opisthocoelia.[19] Most authors still use Sauropoda as the group name.[17]
Originally named by its discoverer Othniel Charles Marsh in 1879, Brontosaurus had long been considered a junior synonym of Apatosaurus; its type species, Brontosaurus excelsus, was reclassified as A. excelsus in 1903. However, an extensive study published in 2015 by a joint British-Portuguese research team concluded that Brontosaurus was a valid genus of sauropod distinct from Apatosaurus.[10][55][56] Nevertheless, not all paleontologists agree with this division.[57][33] The same study classified two additional species that had once been considered Apatosaurus and Eobrontosaurus as Brontosaurus parvus and Brontosaurus yahnahpin respectively.[10]
Brontosaurus excelsus, the type species of Brontosaurus, was first named by Marsh in 1879. Many specimens have been assigned to the species, such as FMNH P25112, the skeleton mounted at the Field Museum of Natural History, which has since been found to represent an unknown species of apatosaurine. Brontosaurus amplus is a junior synonym of B. excelsus. B. excelsus therefore only includes its type specimen and the type specimen of B. amplus.[10][17] The largest of these specimens is estimated to have weighed up to 15 tonnes and measured up to 22 m (72 ft) long from head to tail.[36] The known definitive B. excelsus fossils have been reported from Reed's Quarries 10 and 11 of the Morrison Formation's Brushy Basin Member in Albany County, Wyoming, dated to the late Kimmeridgian age,[10][31] about 152 million years ago.
Brontosaurus parvus, first described as Elosaurus in 1902 by Peterson and Gilmore, was reassigned to Apatosaurus in 1994, and to Brontosaurus in 2015. Specimens assigned to this species include the holotype, CM 566 (a partial skeleton of a juvenile found in Sheep Creek Quarry 4 in Albany County, WY), BYU 1252-18531 (a nearly complete skeleton found in Utah and mounted at Brigham Young University), and the partial skeleton UW 15556. It dates to the middle Kimmeridgian.[17] Adult specimens are estimated to have weighed up to 14 tonnes and measured up to 22 m (72 ft) long from head to tail.[36]
Left front limb of B. yahnahpin, Morrison Natural History Museum
Brontosaurus yahnahpin is the oldest species, known from a single site in the lower Morrison Formation, Bertha Quarry, in Albany County, Wyoming, dating to about 155 million years ago.[58][59] It grew up to 21 m (69 ft) long.[60] It was described by James Filla and Patrick Redman in 1994 as a species of Apatosaurus (A. yahnahpin).[61] The specific name is derived from Lakota mah-koo yah-nah-pin, "breast necklace", a reference to the pairs of sternal ribs that resemble the hair pipes traditionally worn by the tribe. The holotype specimen is TATE-001, a relatively complete postcranial skeleton found in the lower Morrison Formation of Wyoming. More fragmentary remains have also been referred to the species. A re-evaluation by Robert T. Bakker in 1998 found it to be more primitive, so Bakker coined the new generic name Eobrontosaurus, derived from the Greek eos, "dawn", and Brontosaurus.[26]
The cladogram below is the result of an analysis by Tschopp, Mateus, and Benson (2015). The authors analyzed most diplodocid type specimens separately to deduce which specimen belonged to which species and genus.[10]
When Brontosaurus was described in 1879, the widespread notion in the scientific community was that sauropods were sluggish, semi-aquatic reptiles.[62][3][7] In Othniel Marsh's publication The Dinosaurs of North America, he described the dinosaur as "more or less amphibious, and its food was probably aquatic plants or other succulent vegetation".[7] This is unsupported by fossil evidence; instead, sauropods were active animals with adaptations for dwelling on land.[28] Marsh also noted the animal's supposed lack of intellect, based on the small braincase of the Felch Quarry skull and its slender neural cord. Recent research has found signs of intelligence in dinosaurs akin to that of modern birds, though sauropods had relatively small brains.[63]
Various uses for the single claw on the forelimb of sauropods have been proposed. One suggestion is that they were used for defense, but their shape and size make this unlikely. It was also possible they were for foraging, but the most probable use for the claw was grasping objects such as tree trunks when rearing.[52]
Trackways of sauropods like Brontosaurus show that their average range was around 20–40 km (10–25 mi) per day, and they could potentially reach a top speed of 20–30 km/h (12–19 mph).[64] The slow locomotion of sauropods may be due to minimal muscling or to limited recoil after strides.[65] A possible bipedal trackway of a juvenile Apatosaurus is known, but it is disputed whether bipedal locomotion was possible for the sauropod.[66]
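Trackway speeds like these are conventionally estimated with Alexander's (1976) stride formula, v = 0.25 g^0.5 λ^1.67 h^-1.17, where λ is stride length and h is hip height. A sketch with assumed example values (the 5 m stride and 3 m hip height are illustrative guesses, not measurements from any particular Brontosaurus trackway):

```python
# Alexander's (1976) trackway formula for estimating speed from footprints:
# v = 0.25 * g**0.5 * stride**1.67 * hip_height**-1.17  (all in SI units).
def alexander_speed(stride_m, hip_height_m, g=9.81):
    """Estimated walking speed in m/s from stride length and hip height."""
    return 0.25 * g ** 0.5 * stride_m ** 1.67 * hip_height_m ** -1.17

# Assumed example values for a large apatosaurine: 5 m stride, 3 m hips.
v = alexander_speed(5.0, 3.0)
print(f"~{v:.1f} m/s, i.e. about {v * 3.6:.0f} km/h")
```

With these assumed inputs the formula gives a walking pace of roughly 11 km/h, comfortably below the 20–30 km/h top speed quoted above.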
Being a diplodocid sauropod, Brontosaurus was herbivorous and fed on ferns, cycadeoids, seed ferns, and horsetails, eating at ground height as a nonselective browser.[38] The tooth replacement method and physiology of Apatosaurus are unique, with the entire tooth row being replaced at once and up to 60% more often than in Diplodocus. The teeth of Apatosaurus are thick, lack denticles, and are strongly cylindrical in cross-section, whereas they are long, slender, and elliptical in cross-section in Diplodocus. These characteristics imply that Apatosaurus, and likely Brontosaurus, consumed tougher vegetation than Diplodocus.[25] Diplodocids in general also have shorter necks than the long-necked, vertically inclined macronarians. This would result in niche partitioning, the various taxa thus avoiding direct competition with each other by feeding on different plants and at different heights.[67] Hypotheses about the food requirements of Brontosaurus have been made, though predicting them is difficult due to the lack of modern analogues.[68] Endotherms (mammals) and ectotherms (reptiles) require a specific amount of nutrition to survive, which correlates with their metabolism as well as body size. Estimations of the dietary necessities of Brontosaurus were made in 2010, with an estimate of 2×10^4 to 50×10^4 kilojoules needed daily. This led to hypotheses on the distributions of Brontosaurus needed to meet this requirement, though they varied on whether it was an ectotherm or endotherm.
If Brontosaurus were an endotherm, fewer adult individuals could be sustained per unit area than if it were an ectotherm, which could have reached tens of animals per square kilometer.[69][70] Due to this, it has been theorized that Brontosaurus and other sauropods living within the arid environment of the Morrison Formation participated in migrations between feeding sites.[68] James Farlow (1987) calculates that a Brontosaurus-sized dinosaur of about 35 t (34 long tons; 39 short tons) would have possessed 5.7 t (5.6 long tons; 6.3 short tons) of fermentation contents.[71] Assuming Apatosaurus had an avian respiratory system and a reptilian resting metabolism, Frank Paladino et al. (1997) estimate the animal would have needed to consume only about 262 liters (58 imp gal; 69 U.S. gal) of water per day.[72]
Historically, sauropods like Brontosaurus were believed to have been too massive to support their weight on dry land, so theoretically, they must have lived partly submerged in water, perhaps in swamps. Recent findings do not support this, and sauropods are thought to have been fully terrestrial animals.[73] Diplodocids like Brontosaurus are often portrayed with their necks held high up in the air, allowing them to browse on tall trees. Though some studies have suggested that diplodocid necks were less flexible than previously believed,[74] other studies have found that all tetrapods appear to hold their necks at the maximum possible vertical extension when in a normal, alert posture, and argue that the same would hold true for sauropods barring any unknown, unique characteristics that set the soft tissue anatomy of their necks apart from that of other animals.[75]
James Spotila et al. (1991) suggest that the large body size of Brontosaurus and other sauropods would have made them unable to maintain high metabolic rates, as they would not be able to release enough heat. However, temperatures in the Jurassic were 3 degrees Celsius higher than present.[76] Furthermore, they assumed that the animals had a reptilian respiratory system. Matt Wedel found that an avian system would have allowed them to dump more heat.[77] Some scientists have also argued that the heart would have had trouble sustaining sufficient blood pressure to oxygenate the brain.[73]
Given the large body mass and long neck of sauropods like Brontosaurus, physiologists have encountered problems determining how these animals breathed. Beginning with the assumption that, like crocodilians, Brontosaurus did not have a diaphragm, the dead-space volume (the amount of unused air remaining in the mouth, trachea, and air tubes after each breath) has been estimated at 0.184 m3 (184 L) for a 30 t (30 long tons; 33 short tons) specimen. Paladino calculates its tidal volume (the amount of air moved in or out during a single breath) at 0.904 m3 (904 L) with an avian respiratory system, 0.225 m3 (225 L) if mammalian, and 0.019 m3 (19 L) if reptilian.[72]
Based on this, its respiratory system would likely have consisted of parabronchi, with multiple pulmonary air sacs as in avian lungs, and a flow-through lung. An avian respiratory system would need a lung volume of about 0.60 m3 (600 L) compared with a mammalian requirement of 2.95 m3 (2,950 L), which would exceed the space available. The overall thoracic volume of the same-sized Apatosaurus has been estimated at 1.7 m3 (1,700 L), allowing for a 0.50 m3 (500 L), four-chambered heart and a 0.90 m3 (900 L) lung capacity. That would allow about 0.30 m3 (300 L) for the necessary tissue.[72] Evidence for the avian system in Brontosaurus and other sauropods is also present in the pneumaticity of the vertebrae. Though this plays a role in reducing the weight of the animal, Wedel (2003) states they are also likely connected to air sacs, as in birds.[77]
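The arithmetic behind this argument can be checked directly with the figures quoted above (a quick consistency check on the numbers in the text, not new data; all volumes in cubic metres):

```python
# Consistency check of the respiratory figures quoted above for a ~30 t animal.
dead_space = 0.184  # m^3 of air left in the mouth, trachea, and air tubes per breath
tidal_volume = {"avian": 0.904, "mammalian": 0.225, "reptilian": 0.019}

for system, v in tidal_volume.items():
    fresh_air = v - dead_space  # air that actually reaches the gas-exchange surfaces
    print(f"{system:9s}: tidal {v:.3f} m^3, fresh air per breath {fresh_air:+.3f} m^3")
# The reptilian tidal volume (0.019 m^3) is smaller than the dead space itself,
# so such a system could move no fresh air at all.

# Thoracic volume budget for the avian case quoted above:
thorax, heart, lungs = 1.7, 0.5, 0.9
print(f"volume left for other tissue: {thorax - heart - lungs:.1f} m^3")
```

Run this way, the avian system leaves about 0.72 m^3 of fresh air per breath while a reptilian tidal volume could not even clear the dead space, which is the core of the case for an avian-style lung.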
A 1999 microscopic study of Apatosaurus and Brontosaurus bones concluded the animals grew rapidly when young and reached near-adult sizes in about 10 years.[78] In 2008, a study on the growth rates of sauropods was published by biologists Thomas Lehman and Holly Woodward. They said that, by using growth lines and length-to-mass ratios, Apatosaurus would have grown to 25 t (25 long tons; 28 short tons) in 15 years, with growth peaking at 5,000 kg (11,000 lb) in a single year. An alternative method, using limb length and body mass, found Brontosaurus and Apatosaurus grew 520 kg (1,150 lb) per year, and reached their full mass before they were about 70 years old.[79] These estimates have been called unreliable because the calculation methods are not sound; old growth lines would have been obliterated by bone remodeling.[80] One of the first identified growth factors of Apatosaurus was the number of sacral vertebrae, which increased to five by the time of the creature's maturity. This was first noted in 1903 and again in 1936.[22][19]
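As a rough cross-check of the two growth estimates above (simple arithmetic on the figures quoted in the text; the masses and rates are theirs, the calculation is only illustrative):

```python
# Rough arithmetic on the growth estimates quoted above.
adult_mass_kg = 25_000        # ~25 t adult mass from Lehman & Woodward's scenario
fast_years = 15               # growth-line estimate: full size in about 15 years
slow_rate_kg_per_year = 520   # limb-length/body-mass estimate

avg_fast_rate = adult_mass_kg / fast_years
years_at_slow_rate = adult_mass_kg / slow_rate_kg_per_year
print(f"15-year scenario implies ~{avg_fast_rate:.0f} kg/yr on average "
      f"(peaking at 5,000 kg in a single year)")
print(f"520 kg/yr implies ~{years_at_slow_rate:.0f} years to reach 25 t")
```

The slow-rate scenario gives roughly 48 years to reach 25 t, compatible with the claim that full mass was reached before about 70 years of age, while the growth-line scenario implies average rates several times higher.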
Juvenile Brontosaurus material is known based on the type specimen of B. parvus. The material of this specimen, CM 566, includes vertebrae from various regions, one pelvic bone, and some bones of the hindlimb.[17] When describing B. parvus, Peterson and Gilmore noted that the neural spines were sutured, the sacral vertebrae were unfused, and the coracoid was missing. All of these features are signs of immaturity in other archosaurs, showing that sauropods had these traits too.[15] Peterson and Gilmore also theorized that sauropods never stopped growing, which supposedly helped in attaining their massive size, a concept unsupported by fossils.[81]
An article that appeared in the November 1997 issue of Discover magazine reported research into the mechanics of diplodocid tails by Nathan Myhrvold, a computer scientist from Microsoft. Myhrvold carried out a computer simulation of the tail, which in diplodocids like Brontosaurus was a very long, tapering structure resembling a bullwhip. This computer modeling suggested that sauropods were capable of producing a whip-like cracking sound of over 200 decibels, comparable to the volume of a cannon.[82] There is some circumstantial evidence supporting this as well: a number of diplodocids have been found with fused or damaged tail vertebrae, which may be a symptom of cracking their tails; these are particularly common between the 18th and the 25th caudal vertebra, a region the authors consider a transitional zone between the stiff muscular base and the flexible whiplike section.[83] However, Rega (2012) notes that Camarasaurus, while lacking a tailwhip, displays a similar level of caudal co-ossification, and that Mamenchisaurus, while having the same pattern of vertebral metrics, lacks a tailwhip and does not display fusion in any "transitional region". Also, the crush fractures which would be expected if the tail was used as a whip have never been found in diplodocids.[84] More recently, Baron (2020) has considered the use of the tail as a bullwhip unlikely because of the potentially catastrophic muscle and skeletal damage such speeds could cause on the large and heavy tail. Instead, he proposes that the tails might have been used as a tactile organ to keep in touch with the individuals behind and to the sides of the animal in a group, which could have augmented cohesion and allowed communication among individuals while limiting more energetically demanding activities like stopping to search for dispersed individuals, turning to visually check on others behind, or communicating vocally.[85]
The cervical vertebrae of Brontosaurus and Apatosaurus are robust, which has led to speculation about the use of these structures. Such structures were energetically expensive, so the reason for their evolution must have been important to the animal. Notable features include dense cervical ribs and diapophyses, ribs that are angled ventrally, and an overall subtriangular cross-section. These traits are in contrast to the more fragile cervicals of diplodocines.[86] Cervical ribs acted as anchors for the longus colli ventralis and flexor colli lateralis muscles, which are used in the downward motion of the neck. Stronger muscles for ventral motions allowed more force to be exerted downward. The cervical ribs formed a "V"-shape, which could be used to shelter the softer underlying tissues of the neck from damage. The ventral sides of the cervical ribs were capped by round, protruding processes, which have been suggested to have been attachment points for bosses or keratinous spikes. A preprint by Wedel et al. (2015) proposed that, given this combination of traits, Brontosaurus used its neck for combat between individuals, striking with the neck.[42][87] Behavior like this has been observed in other animals like giraffes and large tortoises.[88][89]
The Morrison Formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, ranges between 156.3 million years old (Mya) at its base,[90] and 146.8 Mya at the top,[91] which places it in the late Oxfordian, Kimmeridgian, and early Tithonian stages of the Late Jurassic period. This formation is interpreted as a semiarid environment with distinct wet and dry seasons. The Morrison Basin, where dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan and was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains.[92] This formation is similar in age to the Lourinhã Formation in Portugal and the Tendaguru Formation in Tanzania.[93]
Restoration of a B. excelsus group
Brontosaurus may have been a more solitary animal than other Morrison Formation dinosaurs.[94] As a genus, Brontosaurus existed for a long interval, and was found in most levels of the Morrison. B. excelsus fossils have been reported from only the Brushy Basin Member, dating to the late Kimmeridgian age, about 151 Mya.[59] Older Brontosaurus remains have also been identified from the middle Kimmeridgian, and are assigned to B. parvus.[17] Fossils of these animals have been found in Nine Mile Quarry and Bone Cabin Quarry in Wyoming and at sites in Colorado, Oklahoma, and Utah, present in stratigraphic zones 2–6 according to John Foster’s model.[95]
The length of time taken for Riggs's 1903 reclassification of Brontosaurus as Apatosaurus to be brought to public notice, as well as Osborn's insistence that the Brontosaurus name be retained despite Riggs's paper, meant that Brontosaurus became one of the most famous dinosaurs. Brontosaurus has often been depicted in cinema, beginning with Winsor McCay's 1914 classic Gertie the Dinosaur, one of the first animated films.[98] McCay based his unidentified dinosaur on the apatosaurine skeleton in the American Museum of Natural History.[99] The 1925 silent film The Lost World featured a battle between a Brontosaurus and an Allosaurus, using special effects by Willis O'Brien.[100] The 1933 film King Kong featured a Brontosaurus chasing Carl Denham, Jack Driscoll and the terrified sailors on Skull Island. These, and other early uses of the animal as a major representative of the group, helped cement Brontosaurus as a quintessential dinosaur in the public consciousness.[101]
Sinclair Oil Corporation has long been a fixture of American roads (and briefly in other countries) with its green dinosaur logo and mascot, a Brontosaurus. While Sinclair's early advertising included a number of different dinosaurs, eventually only Brontosaurus was used as the official logo, due to its popular appeal.[102]
Gertie the Dinosaur (1914)
As late as 1989, the U.S. Postal Service caused controversy when it issued four "dinosaur" stamps: Tyrannosaurus, Stegosaurus, Pteranodon, and Brontosaurus. The use of the term Brontosaurus in place of Apatosaurus led to complaints of "fostering scientific illiteracy."[103] The Postal Service defended itself (in Postal Bulletin 21744) by saying, "Although now recognized by the scientific community as Apatosaurus, the name Brontosaurus was used for the stamp because it is more familiar to the general population." Indeed, the Postal Service even implicitly rebuked the somewhat inconsistent complaints by adding that "[s]imilarly, the term 'dinosaur' has been used generically to describe all the animals [i.e., all four of the animals represented in the given stamp set], even though the Pteranodon was a flying reptile [rather than a true 'dinosaur']," a distinction left unmentioned in the voluminous correspondence regarding the Brontosaurus/Apatosaurus issue. Palaeontologist Stephen Jay Gould supported this position. In the essay from which the title of the collection Bully for Brontosaurus is taken, Gould wrote: "Touché and right on; no one bitched about Pteranodon, and that's a real error."[101] His position, however, was not one suggesting the exclusive use of the popular name; he echoed Riggs' original argument that Brontosaurus is a synonym for Apatosaurus. Nevertheless, he noted that the former has developed and continues to maintain an independent existence in the popular imagination.[101]
The more vociferous denunciations of the usage have elicited sharply defensive statements from those who would not wish to see the name be struck from official usage.[101] Tschopp's study[10] has generated a very high number of responses from many, often opposed, groups—of editorial,[104] news staff,[55][105] and personal blog nature (both related[106][107] and not[108]), from both[109] sides of the debate, from related[18] and unrelated contexts, and from all over the world.[110]
Since Wedel et al.'s 2015 preprint,[42] various reconstructions have been made of Brontosaurus individuals engaging in intraspecific combat based on their study. The art typically depicts the neck-battling hypothesis stipulated by their research. Many of these works are published online under the hashtag "#BrontoSmash".[111][112]
Filla, J.A., Redman, P.D. (1994). "Apatosaurus yahnahpin: a preliminary description of a new species of diplodocid dinosaur from the Late Jurassic Morrison Formation of southern Wyoming, the first sauropod found with a complete set of "belly ribs"." Wyoming Geological Association, 44th Annual Field Conference Guidebook. 159–178.
Mateus, Octávio (2006). "Jurassic dinosaurs from the Morrison Formation (USA), the Lourinhã and Alcobaça Formations (Portugal), and the Tendaguru Beds (Tanzania): A comparison". In Foster, John R.; Lucas, Spencer G. (eds.). Paleontology and Geology of the Upper Jurassic Morrison Formation. New Mexico Museum of Natural History and Science Bulletin, 36. Albuquerque, New Mexico: New Mexico Museum of Natural History and Science. pp. 223–231.
Carpenter, Kenneth (2006). "Biggest of the big: a critical re-evaluation of the mega-sauropod Amphicoelias fragillimus". In Foster, John R.; Lucas, Spencer G. (eds.). Paleontology and Geology of the Upper Jurassic Morrison Formation. New Mexico Museum of Natural History and Science Bulletin, 36. Albuquerque, New Mexico: New Mexico Museum of Natural History and Science. pp. 131–138.

Despite this, at least one paleontologist—Robert T. Bakker—argued in the 1990s that A. ajax and A. excelsus are sufficiently distinct that the latter continues to merit a separate genus.[26] In 2015, an extensive study of diplodocid relationships by Emanuel Tschopp, Octavio Mateus, and Roger Benson concluded that Brontosaurus was indeed a valid genus of sauropod distinct from Apatosaurus. The scientists developed a statistical method to more objectively assess differences between fossil genera and species and concluded that Brontosaurus could be "resurrected" as a valid name. They assigned two former Apatosaurus species, A. parvus and A. yahnahpin, to Brontosaurus, as well as the type species B. excelsus.[10] The publication was met with some criticism from other paleontologists, including Michael D'Emic,[32] Donald Prothero, who criticized the mass media reaction to this study as superficial and premature,[33] and many others. Some paleontologists like John and Rebecca Foster continue to consider Brontosaurus as a synonym of Apatosaurus.[34][35]
Brontosaurus was a large, long-necked, quadrupedal animal with a long, whip-like tail, and forelimbs that were slightly shorter than its hindlimbs. The largest species, B. excelsus, measured up to 21–22 m (69–72 ft) long from head to tail and weighed up to 15–17 t (17–19 short tons); other species were smaller, measuring 19 m (62 ft) long and weighing 14 t (15 short tons).[36][37] The skull of Brontosaurus has not been found but was probably similar to the skull of the closely related Apatosaurus. Several skulls of Apatosaurus have been found, all of which are very small in proportion to the body.
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | no_statement | the "brontosaurus" and the apatosaurus were different "dinosaurs".. the "brontosaurus" and the apatosaurus are not the same species of "dinosaur". | https://www.nature.com/articles/nature.2015.17257 | Beloved Brontosaurus makes a comeback | Nature
Beloved Brontosaurus makes a comeback
The name Brontosaurus has endured in popular culture, such as this 1989 US stamp.
Credit: USPS
The Brontosaurus is back. Dinosaur fossils that were originally described as Brontosaurus excelsus in 1879 and later renamed should indeed be classified as Brontosaurus, a study of dozens of dinosaur specimens concludes.
That may not sit well with palaeontology aficionados, who love to point out that Brontosaurus has not been a valid taxonomic name since the early twentieth century. (Just ask the US Postal Service, which was roundly criticized after it released a Brontosaurus postage stamp in 1989.)
The rise, fall and now rise of the Brontosaurus has its roots in the ‘bone wars’ of nineteenth-century palaeontology. While some prospectors dug up the American West in search of mineral fortunes in the middle to late 1800s, others looked for giant lizards. A race between palaeontologists Edward Cope and Othniel Marsh defined the era.
“Cope and Marsh were big rivals,” says Emanuel Tschopp, a palaeontologist at the Nova University of Lisbon, Portugal, who led the latest study, published on 7 April in the journal PeerJ1. “They really rushed new species into press as fast as possible, and many of these reference specimens on which they based new species are extremely fragmentary and are not comparable directly.”
Working in Colorado’s Morrison formation in 1877, Marsh’s field crew uncovered the gargantuan bones of a species he dubbed Apatosaurus ajax — a genus name that translates to deceptive lizard, and a species name that references the Greek hero Ajax. Two years later, Marsh found another giant dinosaur in the same rock formation and named it Brontosaurus excelsus, the noble thunder lizard.
In the early 1900s, after discovering a fossil that was similar to both Brontosaurus and Apatosaurus, other researchers decided that the two dinosaurs were distinct species of the same genus. Subsequent studies only raised further questions about the status of Brontosaurus.
Palaeontologists eventually agreed that Brontosaurus is properly called Apatosaurus, under taxonomic rules drafted by the eighteenth-century Swedish systematist Carl Linnaeus and still in use today. The rules state that the first name given for an animal takes priority. The bones attributed to Brontosaurus excelsus, therefore, belonged to Apatosaurus excelsus.
The first known Brontosaurus fossil was unearthed in the Morrison formation in Colorado.
Credit: Davide Bonadonna
Drawing a family tree
Tschopp didn't set out to resurrect the Brontosaurus when he started analysing different specimens of diplodocid — the group to which Apatosaurus, Diplodocus and other giants belong. But he was interested in reviewing how the fossils had been classified and whether anatomical differences between specimens represented variation within species, or between species or genera. Tschopp and his colleagues analysed nearly 500 anatomical traits in dozens of specimens belonging to all of the 20 or so species of diplodocids to create a family tree. They spent five years amassing data, visiting 20 museums across Europe and the United States.
Very broadly, their tree confirmed established ideas about the evolutionary relationships among diplodocids. But the scientists also concluded that Apatosaurus and Brontosaurus were different enough to belong in their own genera. Many of the anatomical differences between the two dinosaurs are obscure, Tschopp says, but Apatosaurus’s stouter neck is an obvious one. “Even though both are very robust and massive animals, Apatosaurus is even more so,” he adds.
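The kind of specimen-by-specimen comparison described here can be illustrated with a toy sketch (the character matrix, specimen names, and 0/1 scores below are invented purely for illustration; the actual study scored roughly 500 traits across dozens of specimens and used formal phylogenetic methods rather than this simple pairwise count):

```python
# Toy pairwise comparison of anatomical character scores between specimens.
# The 0/1 scores below are invented; a real matrix would hold ~500 characters.
specimens = {
    "Apatosaurus louisae":   [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1],
    "Apatosaurus ajax":      [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1],
    "Brontosaurus excelsus": [0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
}

def dissimilarity(a, b):
    """Fraction of scored characters in which two specimens differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

names = list(specimens)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        d = dissimilarity(specimens[first], specimens[second])
        print(f"{first} vs {second}: {d:.0%} of characters differ")
```

In this invented matrix the two Apatosaurus specimens differ in about 15% of characters, while either differs from the Brontosaurus specimen in well over half — the sort of gap that can then be judged against thresholds for separating species and genera.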
Tschopp and his team thought very carefully about their decision to reinstate Brontosaurus, and they expect some pushback. “We knew it would be a major finding because Brontosaurus is such a popular name,” he says. “I’m pretty sure there will be a scientific discussion around this. I hope there will be. That’s how science works.”
The resurrection of Brontosaurus may grab all the headlines, but the analysis also reshuffles some other dinosaurs. A species called Diplodocus hayi got its own genus, Galeamopus. Meanwhile, the team determined that a dinosaur from Portugal called Dinheirosaurus belongs in the genus Supersaurus, remains of which have been found only in North America.
The name game
The paper represents “the best current view” of diplodocids, says Michael Benton, a vertebrate palaeontologist at the University of Bristol, UK. The traits that distinguish Brontosaurus from Apatosaurus are in line with characteristics that define other genera of sauropod, the larger dinosaur group to which diplodocids belong.
“The discrimination of Brontosaurus from Apatosaurus will be startling,” he says. “It’s the classic example we always use to explain the meaning of ‘synonym’ to students, or as an example of the speed and dastardly deeds of Marsh and Cope as they each rushed to name new taxa, sometimes the same beast.”
Philip Mannion, a palaeontologist at Imperial College London, says the study is important not only for its resurrection of Brontosaurus. By determining which diplodocid bones fall under which species and genus, it should make it easier for palaeontologists to correctly classify new finds while helping them to understand the evolution of some of the largest dinosaurs that lived.
“The public is going to get a lot out of this because Brontosaurus has this very prominent place in the public imagination,” says Mannion, who has a personal stake in the issue.
Several years ago, he was contacted by a poster company asking whether Brontosaurus was a valid dinosaur name. “A father had bought a poster for his child, and the child straight away said Brontosaurus isn’t a real dinosaur,” remembers Mannion, who told the company that the kid was right.
Will Mannion now tell the firm that the Brontosaurus is back? “Maybe I’ll let them get in touch,” he says.
Paleozoology | Were the Brontosaurus and the Apatosaurus the same dinosaur? | no_statement | the "brontosaurus" and the apatosaurus were different "dinosaurs".. the "brontosaurus" and the apatosaurus are not the same species of "dinosaur". | https://ucmp.berkeley.edu/history/marsh.html | Othniel C. Marsh | The description of the magnificent collections which he assembled, and
which have been studied continuously ever since, is still far from complete,
forty years after his death, and he left an impress upon his chosen science
of Vertebrate Paleontology that will last as long as the bones he gathered
and pages he printed endure.
Charles Schuchert and Clara LeVene. . . 1940
Othniel Charles Marsh still retains a reputation as an "armchair
paleontologist," too busy to work in the field, who owed his high standing
not to genius, but to luck and to his family's money. It is true that
his contributions to geology were not of particularly high quality, and
that his paleontological work was sometimes slipshod. Whereas his great rival
Edward Drinker Cope went into the field throughout his career, Marsh
himself spent only four seasons in the field, between 1870 and 1873. It is
also true that the chair of paleontology that Marsh occupied at Yale was
endowed for him by his wealthy uncle, who further endowed the Peabody
Museum of Natural History where Marsh's collections remain to this day.
Marsh's ambitious, possessive, and sometimes unscrupulous and egotistical
nature also made him a rather difficult person to work with. Yet for all
that, his contributions to paleontology and evolution were formidable.
He remains one of the great figures in American paleontology.
Marsh is perhaps most famous as the rival and enemy of
Edward Drinker
Cope, America's other great vertebrate paleontologist of the period.
The two men started out as friends, collecting fossils together in the
eastern United States. Legend has it that the feud between the men began
when Marsh paid some of Cope's hired diggers to send fossils to him and
not to Cope. Matters became worse in 1870, when Cope published a description
of Elasmosaurus, a giant plesiosaur -- and Marsh gleefully pointed
out that Cope had accidentally placed the skull on the wrong end
of the beast. The
battle was on: for the next twenty years, the two men attacked and slandered
each other in print, while they and their crews raced to find and describe
the most and the finest new fossils. Each scientist hired field crews to
unearth and ship back fossils as fast as possible. The rival crews were
known to spy on each other, dynamite their own and each other's secret
localities (to keep their opponents from digging there), and occasionally
steal each other's fossils -- all the time exposed to harsh conditions
and danger from hostile Native Americans. "The Great Bone Wars," or
"The Great Bone Rush," will live long in paleontological folklore.
Meanwhile, the two scientists worked furiously to describe their fossils.
In their haste, they often based descriptions of new species on sparse
material, and sometimes mixed up bones from different animals, or gave
different names to the same animal. To give the most famous case: In 1877,
Marsh hastily described a new species of
sauropod
dinosaur, which he named Apatosaurus. This description was not based on
anything like a complete skeleton; all Marsh had at the time were
some vertebrae and
part of the pelvis. In 1879, he hastily named and described another
sauropod, Brontosaurus, also based on incomplete material. In
1883, after more of the skeleton had been unearthed,
he presented a full reconstruction of the skeleton of
Brontosaurus, which remains one of the most complete sauropod
skeletons known. Not until 1903 did paleontologist Elmer Riggs show that
the bones described as Brontosaurus and Apatosaurus belonged
to the same species of dinosaur. By the rules of scientific naming, the
first name given a species supersedes all others. And so, as any
six-year-old dinosaur enthusiast will tell you, Brontosaurus is no
longer a valid scientific name. As if that weren't enough, Marsh had
mistakenly given his skeleton of Brontosaurus the skull of a third
sauropod, Camarasaurus -- an error that many paleontologists
suspected, but that wasn't conclusively shown to be wrong until the 1970s.
Despite such shenanigans, the feud between Marsh and Cope benefitted
paleontology immensely. When Marsh and Cope began to work, only eighteen
dinosaur species were known from North America -- many only known from
isolated teeth or vertebrae. Between them, the two men described over
130 species of dinosaurs. It was Marsh who described such famous dinosaurs
as Stegosaurus and Triceratops. Both also made great
discoveries of fossil mammals and other vertebrates. Although not the
first paleontologists to work in the "Wild West," Marsh and Cope opened
up the immense troves of fossils to be found in the western United States.
Who won the "Bone Wars"? The real winners were the museums that ended up
housing the two men's enormous collections -- Marsh's at the Peabody Museum
and the Smithsonian Institution; Cope's at the Academy of Natural Sciences
in Philadelphia. These have remained a rich source of data for generations
of paleontologists.
Where Cope was a neo-Lamarckian -- a believer in the inheritance of acquired
traits -- Marsh was one of the first American converts to Darwin's theory
of evolution. As it turned out, he also gathered an immense amount of
data to support it. Darwin's book Origin of Species was published
in 1859, during Marsh's senior year at Yale. In 1862 and 1865, Marsh traveled
to England, where he met scientists such as Charles Lyell,
Thomas Henry
Huxley, and Charles Darwin himself. Marsh later wrote of Huxley as
a "guide, philosopher, and friend, almost from the time I made the choice
of science as my life work."
Marsh's enormous collection of fossils enabled him to fill in a number of the
gaps in the fossil record that were troublesome for supporters of Darwinian
evolution. His descriptions in the 1870s of Cretaceous toothed birds such as
Ichthyornis and Hesperornis, coming right on the heels of the
discovery of Archaeopteryx, filled in a major gap in the early
history of birds. In 1877, Marsh proposed the theory that birds
were descended from dinosaurs, following Thomas Henry Huxley. Later,
in 1881, Marsh suggested a close affinity between birds and coelurosaurs
(small carnivorous dinosaurs):
In some of these [dinosaurs], the separate bones of the skeleton cannot be
distinguished with certainty from those of Jurassic birds. . . Some of
these diminutive Dinosaurs were perhaps arboreal in habit, and the differences
between them and the birds that lived with them may have been at first
mainly one of feathers. . . | This description was not based on
anything like a complete skeleton; all Marsh had at the time were
some vertebrae and
part of the pelvis. In 1879, he hastily named and described another
sauropod, Brontosaurus, also based on incomplete material. In
1883, after more of the skeleton had been unearthed,
he presented a full reconstruction of the skeleton of
Brontosaurus, which remains one of the most complete sauropod
skeletons known. Not until 1903 did paleontologist Elmer Riggs show that
the bones described as Brontosaurus and Apatosaurus belonged
to the same species of dinosaur. By the rules of scientific naming, the
first name given a species supersedes all others. And so, as any
six-year-old dinosaur enthusiast will tell you, Brontosaurus is no
longer a valid scientific name. As if that weren't enough, Marsh had
mistakenly given his skeleton of Brontosaurus the skull of a third
sauropod, Camarasaurus -- an error that many paleontologists
suspected, but that wasn't conclusively shown to be wrong until the 1970s.
Despite such shenanigans, the feud between Marsh and Cope benefitted
paleontology immensely. When Marsh and Cope began to work, only eighteen
dinosaur species were known from North America -- many only known from
isolated teeth or vertebrae. Between them, the two men described over
130 species of dinosaurs. It was Marsh who described such famous dinosaurs
as Stegosaurus and Triceratops. | yes
Archaeology | Were there Dark Ages in the Middle Ages? | yes_statement | there were "dark" ages in the middle ages.. the middle ages experienced a period known as the "dark" ages. | https://en.wikipedia.org/wiki/Dark_Ages_(historiography) | Dark Ages (historiography) - Wikipedia | The concept of a "Dark Age" as a historiographical periodization originated in the 1330s with the Italian scholar Petrarch, who regarded the post-Roman centuries as "dark" compared to the "light" of classical antiquity.[1][2] The term employs traditional light-versus-darkness imagery to contrast the era's "darkness" (ignorance and error) with earlier and later periods of 'light' (knowledge and understanding).[1] The phrase Dark Age(s) itself derives from the Latin saeculum obscurum, originally applied by Caesar Baronius in 1602 when he referred to a tumultuous period in the 10th and 11th centuries.[3][4] The concept thus came to characterize the entire Middle Ages as a time of intellectual darkness in Europe between the fall of Rome and the Renaissance, a characterization that became especially popular during the 18th-century Age of Enlightenment.[1] Others, however, have used the term to denote the relative ignorance of historians regarding at least the early part of the Middle Ages, owing to a scarcity of records.
As the accomplishments of the era came to be better understood in the 19th and the 20th centuries, scholars began restricting the Dark Ages appellation to the Early Middle Ages (c. 5th–10th century),[1][5][6] and today's scholars also reject its usage for the period.[7] The majority of modern scholars avoid the term altogether due to its negative connotations, finding it misleading and inaccurate.[8][9][10][11] Despite this, Petrarch's pejorative meaning remains in use,[12][13][14] particularly in popular culture, which often simplistically views the Middle Ages as a time of violence and backwardness.[15][16]
The idea of a Dark Age originated with the Tuscan scholar Petrarch in the 1330s.[14][17] Writing of the past, he said: "Amidst the errors there shone forth men of genius; no less keen were their eyes, although they were surrounded by darkness and dense gloom".[18] Christian writers, including Petrarch himself,[17] had long used traditional metaphors of 'light versus darkness' to describe 'good versus evil'. Petrarch was the first to give the metaphor secular meaning by reversing its application. He now saw classical antiquity, so long considered a 'dark' age for its lack of Christianity, in the 'light' of its cultural achievements, while Petrarch's own time, allegedly lacking such cultural achievements, was seen as the age of darkness.[17]
From his perspective on the Italian peninsula, Petrarch saw the Roman period and classical antiquity as an expression of greatness.[17] He spent much of his time traveling through Europe, rediscovering and republishing classic Latin and Greek texts. He wanted to restore the Latin language to its former purity. Renaissance humanists saw the preceding 900 years as a time of stagnation, with history unfolding not along the religious outline of Saint Augustine's Six Ages of the World, but in cultural (or secular) terms through progressive development of classical ideals, literature, and art.
Petrarch wrote that history had two periods: the classic period of Greeks and Romans, followed by a time of darkness in which he saw himself living. In around 1343, in the conclusion of his epic Africa, he wrote: "My fate is to live among varied and confusing storms. But for you perhaps, if as I hope and wish you will live long after me, there will follow a better age. This sleep of forgetfulness will not last forever. When the darkness has been dispersed, our descendants can come again in the former pure radiance."[19] In the 15th century, historians Leonardo Bruni and Flavio Biondo developed a three-tier outline of history. They used Petrarch's two ages, plus a modern, 'better age', which they believed the world had entered. Later, the term 'Middle Ages' – Latin media tempestas (1469) or medium aevum (1604), was used to describe the period of supposed decline.[20]
During the Reformations of the 16th and 17th centuries, Protestants generally had a similar view to Renaissance humanists such as Petrarch, but also added an anti-Catholic perspective. They saw classical antiquity as a golden time not only because of its Latin literature but also because it witnessed the beginnings of Christianity. They promoted the idea that the 'Middle Age' was a time of darkness also because of corruption within the Catholic Church, such as popes ruling as kings, veneration of saints' relics, a licentious priesthood and institutionalized moral hypocrisy.[21]
"The new age (saeculum) that was beginning, for its harshness and barrenness of good could well be called iron, for its baseness and abounding evil leaden, and moreover for its lack of writers (inopia scriptorum) dark (obscurum)".[27]
Significantly, Baronius termed the age 'dark' because of the paucity of written records. The "lack of writers" he referred to may be illustrated by comparing the number of volumes in Migne's Patrologia Latina containing the work of Latin writers from the 10th century (the heart of the age he called 'dark') with the number containing the work of writers from the preceding and succeeding centuries. A minority of these writers were historians.
[Figure: Medieval production of manuscripts.[28] The beginning of the Middle Ages was also a period of low activity in copying; the graph does not include the Byzantine Empire.]
There is a sharp drop from 34 volumes in the 9th century to just 8 in the 10th. The 11th century, with 13, evidences a certain recovery, and the 12th century, with 40, surpasses the 9th, something that the 13th, with just 26, fails to do. There was indeed a 'dark age', in Baronius's sense of a "lack of writers", between the Carolingian Renaissance in the 9th century and the beginnings, sometime in the 11th, of what has been called the Renaissance of the 12th century. Furthermore, there was an earlier period of "lack of writers" during the 7th and 8th centuries. Therefore, in Western Europe, two 'dark ages' can be identified, separated by the brilliant but brief Carolingian Renaissance.
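The rise-and-fall pattern in these volume counts can be tabulated directly. A minimal sketch in Python (the counts are simply those quoted in the passage above, keyed by century; nothing here is independently verified):

```python
# Volumes of Migne's Patrologia Latina per century, as cited in the text.
volumes = {9: 34, 10: 8, 11: 13, 12: 40, 13: 26}

# A century reads as "dark" in Baronius's sense when output drops sharply
# relative to the preceding century.
for century in sorted(volumes)[1:]:
    change = volumes[century] - volumes[century - 1]
    print(f"{century}th century: {volumes[century]} volumes ({change:+d} vs. {century - 1}th)")
```

Running it makes the sharp 9th-to-10th-century drop, the "lack of writers" that Baronius found so irksome, stand out at a glance.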
Baronius' 'dark age' seems to have struck historians, for it was in the 17th century that the term started to spread to various European languages, with his original Latin term saeculum obscurum being reserved for the period to which he had applied it. Some, following Baronius, used 'dark age' neutrally to refer to a dearth of written records, but others used it pejoratively and lapsed into that lack of objectivity that has discredited the term for many modern historians.
The first British historian to use the term was most likely Gilbert Burnet, in the form 'darker ages' which appears several times in his work during the later 17th century. The earliest reference seems to be in the "Epistle Dedicatory" to Volume I of The History of the Reformation of the Church of England of 1679, where he writes: "The design of the reformation was to restore Christianity to what it was at first, and to purge it of those corruptions, with which it was overrun in the later and darker ages."[29] He uses it again in the 1682 Volume II, where he dismisses the story of "St George's fighting with the dragon" as "a legend formed in the darker ages to support the humour of chivalry".[30] Burnet was a bishop chronicling how England became Protestant, and his use of the term is invariably pejorative.
Consequently, an evolution had occurred in at least three ways. Petrarch's original metaphor of light versus dark has expanded over time, implicitly at least. Even if later humanists no longer saw themselves living in a dark age, their times were still not light enough for 18th-century writers who saw themselves as living in the real Age of Enlightenment, while the period to be condemned stretched to include what we now call Early Modern times. Additionally, Petrarch's metaphor of darkness, which he used mainly to deplore what he saw as a lack of secular achievement, was sharpened to take on a more explicitly anti-religious and anti-clerical meaning.
In the late 18th and the early 19th centuries, the Romantics reversed the negative assessment of Enlightenment critics with a vogue for medievalism.[33] The word "Gothic" had been a term of opprobrium akin to "Vandal" until a few self-confident mid-18th-century English "Goths" like Horace Walpole initiated the Gothic Revival in the arts. This stimulated interest in the Middle Ages, which for the following generation began to take on the idyllic image of an "Age of Faith". This, reacting to a world dominated by Enlightenment rationalism, expressed a romantic view of a Golden Age of chivalry. The Middle Ages were seen with nostalgia as a period of social and environmental harmony and spiritual inspiration, in contrast to the excesses of the French Revolution and, most of all, to the environmental and social upheavals and utilitarianism of the developing Industrial Revolution.[34] The Romantics' view is still represented in modern-day fairs and festivals celebrating the period with 'merrie' costumes and events.
Just as Petrarch had twisted the meaning of light and darkness, the Romantics had twisted the judgment of the Enlightenment. However, the period that they idealized was largely the High Middle Ages, extending into Early Modern times. In one respect, that negated the religious aspect of Petrarch's judgment, since these later centuries were those when the power and prestige of the Church were at their height. To many, the scope of the Dark Ages was becoming divorced from this period, denoting mainly the centuries immediately following the fall of Rome.
The term was widely used by 19th-century historians. In 1860, in The Civilization of the Renaissance in Italy, Jacob Burckhardt delineated the contrast between the medieval 'dark ages' and the more enlightened Renaissance, which had revived the cultural and intellectual achievements of antiquity.[35] The earliest entry for a capitalized "Dark Ages" in the Oxford English Dictionary (OED) is a reference in Henry Thomas Buckle's History of Civilization in England in 1857, who wrote: "During these, which are rightly called the Dark Ages, the clergy were supreme." The OED in 1894 defined an uncapitalised "dark ages" as "a term sometimes applied to the period of the Middle Ages to mark the intellectual darkness characteristic of the time".[36]
However, the early 20th century saw a radical re-evaluation of the Middle Ages, which called into question the terminology of darkness,[10] or at least its more pejorative use. In 1977, the historian Denys Hay spoke ironically of "the lively centuries which we call dark".[37] More forcefully, a book about the history of German literature published in 2007 describes "the dark ages" as "a popular if uninformed manner of speaking".[38]
Most modern historians do not use the term "dark ages" and prefer terms such as Early Middle Ages. However, when used by some historians today, the term "Dark Ages" is meant to describe the economic, political and cultural problems of the era.[39][40] For others, the term Dark Ages is intended to be neutral, expressing the idea that the events of the period seem 'dark' to us because of the paucity of the historical record.[10] For example, Robert Sallares, commenting on the lack of sources to establish whether the plague pandemic of 541 to 750 reached Northern Europe, opines that "the epithet Dark Ages is surely still an appropriate description of this period".[41] The term is also used in this sense (often in the singular) to reference the Bronze Age collapse and the subsequent Greek Dark Ages,[12] the brief Parthian Dark Age (1st century BC),[42] the dark ages of Cambodia (c. 1450–1863 AD), and also a hypothetical Digital Dark Age which would ensue if the electronic documents produced in the current period were to become unreadable at some point in the future.[43] Some Byzantinists have used the term Byzantine Dark Ages to refer to the period from the earliest Muslim conquests to about 800,[44] because there are no extant historical texts in Greek from the period, and thus the history of the Byzantine Empire and its territories that were conquered by the Muslims is poorly understood and must be reconstructed from other contemporaneous sources, such as religious texts.[45][46] The term "dark age" is not restricted to the discipline of history. Since the archaeological evidence for some periods is abundant and for others scanty, there are also archaeological dark ages.[47]
Since the Late Middle Ages significantly overlap with the Renaissance, the term 'Dark Ages' became restricted to distinct times and places in medieval Europe. Thus the 5th and 6th centuries in Britain, at the height of the Saxon invasions, have been called "the darkest of the Dark Ages",[48] in view of the societal collapse of the period and the consequent lack of historical records. Further south and east, the same was true in the former Roman province of Dacia, where history after the Roman withdrawal went unrecorded for centuries as Slavs, Avars, Bulgars, and others struggled for supremacy in the Danube basin, and events there are still disputed. However, at this time the Abbasid Caliphate is often considered to have experienced its Golden Age rather than Dark Age; consequently, usage of the term must also specify a geography. While Petrarch's concept of a Dark Age corresponded to a mostly Christian period following pre-Christian Rome, today the term mainly applies to the cultures and periods in Europe that were least Christianized, and thus most sparsely covered by chronicles and other contemporary sources, at the time mostly written by Catholic clergy.
However, from the later 20th century onward, other historians became critical even of this nonjudgmental use of the term for two main reasons.[10] Firstly, it is questionable whether it is ever possible to use the term in a neutral way: scholars may intend it, but ordinary readers may not understand it so. Secondly, 20th-century scholarship had increased understanding of the history and culture of the period,[49] to such an extent that it is no longer really 'dark' to us.[10] To avoid the value judgment implied by the expression, many historians now avoid it altogether.[50][51] It was occasionally used up to the 1990s by historians of early medieval Britain, for example in the title of the 1991 book by Ann Williams, Alfred Smyth and D. P. Kirby, A Biographical Dictionary of Dark Age Britain, England, Scotland and Wales, c.500–c.1050,[52] and in the comment by Richard Abels in 1998 that the greatness of Alfred the Great "was the greatness of a Dark Age king".[53] In 2020, John Blair, Stephen Rippon and Christopher Smart observed that: "The days when archaeologists and historians referred to the fifth to the tenth centuries as the 'Dark Ages' are long gone, and the material culture produced during that period demonstrates a high degree of sophistication."[54]
A 2021 lecture by Howard Williams of Chester University explored how "stereotypes and popular perceptions of the Early Middle Ages – popularly still considered the European 'Dark Ages' – plague popular culture",[55] finding that 'Dark Ages' is "rife outside of academic literature, including in newspaper articles and media debates."[56] As to why it is used, Williams argues that legends and racial misunderstandings have been revitalized by modern nationalists, colonialists and imperialists around present-day concepts of identity, faith and origin myths, i.e. appropriating historical myths for modern political ends.[56]
In a 2017 book about medievalisms in popular culture, Andrew B. R. Elliott found "by far" the most common use of 'Dark Ages' is to "signify a general sense of backwardness or lack of technological sophistication", in particular noting how it has become entrenched in daily and political discourse.[57] Reasons for use, according to Elliott, are often "banal medievalisms", which are "characterized mainly by being unconscious, unwitting and by having little or no intention to refer to the Middle Ages"; for example, referring to an insurance industry that still relied on paper instead of computers as being in the 'Dark Ages'.[58] These banal uses are little more than tropes that inherently contain a criticism about lack of progress.[57] Elliott connects 'Dark Ages' to the "Myth of Progress", also observed by Joseph Tainter, who says, "There is genuine bias against so-called 'Dark Ages'" because of a modern belief that society normally traverses from lesser to greater complexity, and when complexity is reduced during a collapse, this is perceived as out of the ordinary and thus undesirable; he counters that complexity is rare in human history, a costly mode of organization that must be constantly maintained, and periods of less complexity are common and to be expected as part of the overall progression towards greater complexity.[15]
In Peter S. Wells's 2008 book, Barbarians to Angels: The Dark Ages Reconsidered, he writes, "I have tried to show that far from being a period of cultural bleakness and unmitigated violence, the centuries (5th - 9th) known popularly as the Dark Ages were a time of dynamic development, cultural creativity, and long-distance networking".[59] He writes that our "popular understanding" of these centuries "depends largely on the picture of barbarian invaders that Edward Gibbon presented more than two hundred years ago," and that this view has been accepted "by many who have read and admire Gibbon's work."[60]
David C. Lindberg, a science and religion historian, says the 'Dark Ages' are "according to wide-spread popular belief" portrayed as "a time of ignorance, barbarism and superstition", for which he asserts "blame is most often laid at the feet of the Christian church".[61] Medieval historian Matthew Gabriele echoes this view as a myth of popular culture.[62] Andrew B. R. Elliott notes the extent to which "Middle Ages/Dark Ages have come to be synonymous with religious persecution, witch hunts and scientific ignorance".[63]
^Thompson, Bard (1996). Humanists and Reformers: A History of the Renaissance and Reformation. Grand Rapids, MI: Erdmans. p. 13. ISBN978-0-8028-6348-5. Petrarch was the very first to speak of the Middle Ages as a 'dark age', one that separated him from the riches and pleasures of classical antiquity and that broke the connection between his own age and the civilization of the Greeks and the Romans.
^Ker, W. P. (1904). The Dark Ages. New York: C. Scribner's Sons. p. 1. The Dark Ages and the Middle Ages — or the Middle Age — used to be the same; two names for the same period. But they have come to be distinguished, and the Dark Ages are now no more than the first part of the Middle Age, while the term mediaeval is often restricted to the later centuries, about 1100 to 1500, the age of chivalry, the time between the first Crusade and the Renaissance. This was not the old view, and it does not agree with the proper meaning of the name.
^Halsall, Guy (2005). Fouracre, Paul (ed.). The New Cambridge Medieval History: c.500-c.700. Vol. 1. Cambridge University Press. p. 90. In terms of the sources of information available, this is most certainly not a Dark Age.... Over the last century, the sources of evidence have increased dramatically, and the remit of the historian (broadly defined as a student of the past) has expanded correspondingly.
^Snyder, Christopher A. (1998). An Age of Tyrants: Britain and the Britons A.D. 400–600. University Park: Pennsylvania State University Press. pp. xiii–xiv. ISBN0-271-01780-5. In explaining his approach to writing the work, Snyder refers to the "so-called Dark Ages" and notes, "Historians and archaeologists have never liked the label Dark Ages... there are numerous indicators that these centuries were neither 'dark' nor 'barbarous' in comparison with other eras."
^Raico, Ralph (30 November 2006). "The European Miracle". Archived from the original on 3 September 2011. Retrieved 14 August 2011. "The stereotype of the Middle Ages as 'the Dark Ages' fostered by Renaissance humanists and Enlightenment philosophes has, of course, long since been abandoned by scholars."
^Petrarch (1367). Apologia cuiusdam anonymi Galli calumnias (Defence against the calumnies of an anonymous Frenchman), in Petrarch, Opera Omnia, Basel, 1554, p. 1195. This quotation comes from the English translation of Mommsen's article, where the source is given in a footnote. Cf. also Marsh, D, ed., (2003), Invectives, Harvard University Press, p. 457.
^Daileader, Philip (2001). The High Middle Ages. The Teaching Company. ISBN1-56585-827-1. "Catholics living during the Protestant Reformation were not going to take this assault lying down. They, too, turned to the study of the Middle Ages, going back to prove that, far from being a period of religious corruption, the Middle Ages were superior to the era of the Protestant Reformation, because the Middle Ages were free of the religious schisms and religious wars that were plaguing the 16th and 17th centuries."
^Baronius's actual starting-point for the "dark age" was 900 (annus Redemptoris nongentesimus), but that was an arbitrary rounding off that was due mainly to his strictly annalistic approach. Later historians, such as Marco Porri in his Catholic History of the Church (Storia della Chiesa, archived 2011-07-16 at the Wayback Machine) and the Lutheran Christian Cyclopedia ("Saeculum Obscurum", archived 2009-10-19 at the Wayback Machine), have tended to amend it to the more historically significant date of 888 and often rounded it down further to 880. The first weeks of 888 witnessed both the final break-up of the Carolingian Empire and the death of its deposed ruler Charles the Fat. Unlike the end of the Carolingian Empire, however, the end of the Carolingian Renaissance cannot be precisely dated, and it was the latter development that was responsible for the "lack of writers" that Baronius, as a historian, found so irksome.
^Burnet, Gilbert (1679). The History of the Reformation of the Church of England, Vol. I. Oxford, 1929, p. ii.
^Burnet, Gilbert (1682). The History of the Reformation of the Church of England, Vol. II. Oxford, 1829, p. 423. Burnet also uses the term in 1682 in The Abridgement of the History of the Reformation of the Church of England (2nd Edition, London, 1683, p. 52) and in 1687 in Travels through France, Italy, Germany and Switzerland (London, 1750, p. 257). The Oxford English Dictionary erroneously cites the last of these as the earliest recorded use of the term in English.
^Bartlett, Robert (2001). "Introduction: Perspectives on the Medieval World", in Medieval Panorama. ISBN0-89236-642-7. "Disdain about the medieval past was especially forthright amongst the critical and rationalist thinkers of the Enlightenment. For them the Middle Ages epitomized the barbaric, priest-ridden world they were attempting to transform."
^Gibbon, Edward (1788). The History of the Decline and Fall of the Roman Empire, Vol. 6, Ch. XXXVII, paragraph 619.
^Alexander, Michael (2007). Medievalism: The Middle Ages in Modern England. Yale University Press.
^Chandler, Alice K. (1971). A Dream of Order: The Medieval Ideal in Nineteenth-Century English Literature. University of Nebraska Press, p. 4.
^Buckle, History of Civilization in England, I, ix, p. 558, quoted in Oxford English Dictionary, D-Deceit (1894), p. 34. The 1989 second edition of the OED retains the 1894 definition and adds "often restricted to the early period of the Middle Ages, between the time of the fall of Rome and the appearance of vernacular written documents".
^Dunphy, Graeme (2007). "Literary Transitions, 1300–1500: From Late Mediaeval to Early Modern" in: The Camden House History of German Literature vol IV: "Early Modern German Literature". The chapter opens: "A popular if uninformed manner of speaking refers to the medieval period as "the dark ages." If there is a dark age in the literary history of Germany, however, it is the one that follows: the fourteenth and early fifteenth centuries, the time between the Middle High German Blütezeit and the full blossoming of the Renaissance. It may be called a dark age, not because literary production waned in these decades, but because nineteenth-century aesthetics and twentieth-century university curricula allowed the achievements of that time to fade into obscurity."
^Sallares, Robert (2007). "Ecology, Evolution and Epidemiology of Plague". In Little, Lester (ed.). Plague and the End of Antiquity. Cambridge, UK: Cambridge University Press. p. 257. ISBN978-0-521-84639-4.
^Cannon, John and Griffiths, Ralph (2000). The Oxford Illustrated History of the British Monarchy (Oxford Illustrated Histories), 2nd Revised edition. Oxford, England: Oxford University Press, p. 1. The first chapter opens with the sentence: "In the darkest of the Dark Ages, the fifth and sixth centuries, there were many kings in Britain but no kingdoms."
^Encyclopædia Britannica (archived 2015-05-04 at the Wayback Machine): "It is now rarely used by historians because of the value judgment it implies. Though sometimes taken to derive its meaning from the fact that little was then known about the period, the term's more usual and pejorative sense is of a period of intellectual darkness and barbarity."
^Kyle Harper (2017). The Fate of Rome: Climate, Disease, and the End of an Empire (The Princeton History of the Ancient World). Princeton University Press. p. 12. These used to be called the Dark Ages. That label is best set aside. It is hopelessly redolent of Renaissance and Enlightenment prejudices. It altogether underestimates the impressive cultural vitality and enduring spiritual legacy of the entire period that has come to be known as "late antiquity". At the same time we do not have to euphemize the realities of imperial disintegration, economic collapse and societal disintegration.
^David C. Lindberg (2003). "The Medieval Church Encounters the Classical Tradition: Saint Augustine, Roger Bacon, and the Handmaiden Metaphor". In David C. Lindberg; Ronald L. Numbers (eds.). When Science & Christianity Meet. Chicago: University of Chicago Press. p. 7. ISBN9780226482156. According to widespread popular belief, the period of European history known as the Middle Ages was a time of barbarism, ignorance and superstition. The epithet 'Dark Ages' often applied to it nicely captures this opinion. As for the ills that threatened literacy, learning, and especially science during the Middle Ages, blame is most often laid at the feet of the Christian church... | yes
Archaeology | Were there Dark Ages in the Middle Ages? | yes_statement | there were "dark" ages in the middle ages.. the middle ages experienced a period known as the "dark" ages. | https://www.history.com/news/6-reasons-the-dark-ages-werent-so-dark | 6 Reasons the Dark Ages Weren't So Dark | HISTORY | 6 Reasons the Dark Ages Weren’t So Dark
1. The idea of the “Dark Ages” came from later scholars who were heavily biased toward ancient Rome.
In the years following 476 A.D., various Germanic peoples conquered the former Roman Empire in the West (including Europe and North Africa), shoving aside ancient Roman traditions in favor of their own. The negative view of the so-called “Dark Ages” became popular largely because most of the written records of the time (including St. Jerome and St. Patrick in the fifth century, Gregory of Tours in the sixth and Bede in the eighth) had a strong Rome-centric bias.
While it’s true that such innovations as Roman concrete were lost, and the literacy rate was not as high in the Early Middle Ages as in ancient Rome, the idea of the so-called “Dark Ages” came from Renaissance scholars like Petrarch, who viewed ancient Greece and Rome as the pinnacle of human achievement. Accordingly, they dismissed the era that followed as a dark and chaotic time in which no great leaders emerged, no scientific accomplishments were made and no great art was produced.
2. The Church replaced the Roman Empire as the most powerful force in Europe, redefining the relationship between church and state.
In Rome’s absence, Europe in the Early Middle Ages lacked a large kingdom or other political structure as a single centralizing force, apart from a brief period during the reign of the Frankish Emperor Charlemagne (more on that later). Instead, the medieval Church grew into the most powerful institution in Europe, thanks in no small part to the rise of monasticism, a movement that began in the third century with St. Anthony of Egypt and would rise to its most influential point in the High Middle Ages (1000-1300 A.D.).
Kings, queens and other rulers during the early medieval period drew much of their authority and power from their relationship with the Church. The rise of a strong papacy, beginning with Gregory the Great (pope from 590 to 604), meant that European monarchs could not monopolize power, unlike in the days of the Roman Empire. This idea of limits on royal power would continue into the High Middle Ages, influencing such milestones as the Magna Carta and the birth of the English Parliament.
3. The growth of monasticism had important implications for later Western values and attitudes.
The dominance of the Church during the Early Middle Ages was a major reason later scholars—specifically those of the Protestant Reformation in the 16th century and the Enlightenment in the 17th and 18th centuries—branded the period as “unenlightened” (otherwise known as dark), believing the clergy repressed intellectual progress in favor of religious piety. But early Christian monasteries encouraged literacy and learning, and many medieval monks were both patrons of the arts and artists themselves.
One particularly influential monk of the Early Middle Ages was Benedict of Nursia (480-543), who founded the great monastery of Montecassino. His Benedictine Rule—a kind of written constitution laying out standards for the monastery and congregation and limiting the abbot’s authority according to these standards—spread across Europe, eventually becoming the model for most Western monasteries. Finally, Benedict’s insistence that “Idleness is the enemy of the soul” and his rule that monks should do manual as well as intellectual and spiritual labor anticipated the famous Protestant work ethic by centuries.
4. The Early Middle Ages were boom times for agriculture.
Before the Early Middle Ages, Europe’s agricultural prosperity was largely limited to the south, where sandy, dry and loose soil was well suited to the earliest functioning plough, known as the scratch plough. But the invention of the heavy plough, which could turn over the much more fertile clay soil deep in the earth, would galvanize the agriculture of northern Europe by the 10th century. Another key innovation of the period was the horse collar, which was placed around a horse’s neck and shoulders to distribute weight and protect the animal when pulling a wagon or plough. Horses proved to be much more powerful and effective than oxen, and the horse collar would revolutionize both agriculture and transportation. The use of metal horseshoes had become common practice by 1000 A.D. as well.
Scientists also believe something called the Medieval Warm Period took place from 900 to 1300, during which the world experienced relatively warm conditions. This held particularly true for the Northern Hemisphere, extending from Greenland eastward through Europe. Combined with key advances in farming technology, uncommonly good weather appears to have fueled the agricultural boom of the period.
5. Great advances were made in science and math—in the Islamic world.
Among the more popular myths about the “Dark Ages” is the idea that the medieval Christian church suppressed natural scientists, prohibiting procedures such as autopsies and dissections and basically halting all scientific progress. Historical evidence doesn’t support this idea: Progress may have been slower in Western Europe during the Early Middle Ages, but it was steady, and it laid the foundations for future advances in the later medieval period.
At the same time, the Islamic world leaped ahead in mathematics and the sciences, building on a foundation of Greek and other ancient texts translated into Arabic. The Latin translation of “The Compendious Book on Calculation by Completion and Balancing,” by the ninth-century Persian astronomer and mathematician al-Khwarizmi (c. 780-c. 850), would introduce Europe to algebra, including the first systematic solution of linear and quadratic equations; the Latinized version of al-Khwarizmi’s name gave us the word “algorithm.”
6. The Carolingian Renaissance saw a flowering in the arts, literature, architecture and other cultural realms.
Karl, a son of Pepin the Short, inherited the Frankish kingdom with his brother Carloman when Pepin died in 768. Carloman died several years later, and 29-year-old Karl assumed complete control, beginning his historic reign as Charlemagne (or Charles the Great). Over some 50 military campaigns, his forces fought Muslims in Spain, Bavarians and Saxons in northern Germany and Lombards in Italy, expanding the Frankish empire exponentially. As representative of the first Germanic tribe to practice Catholicism, Charlemagne took seriously his duty to spread the faith. In 800, Pope Leo III crowned Charlemagne “emperor of the Romans,” which eventually evolved into the title of Holy Roman Emperor.
Charlemagne worked to uphold this lofty distinction, building a strong centralized state, fostering a rebirth of Roman-style architecture, promoting educational reform and ensuring the preservation of classic Latin texts. A key advancement of Charlemagne’s rule was the introduction of a standard handwriting script, known as Carolingian minuscule. With innovations like punctuation, cases and spacing between words, it revolutionized reading and writing and facilitated the production of books and other documents. Though the Carolingian dynasty had dissolved by the end of the ninth century (Charlemagne himself died in 814), his legacy would provide the foundations—including books, schools, curricula and teaching techniques—for the Renaissance and other later cultural revivals.
Sarah Pruitt is a writer and editor based in seacoast New Hampshire. She has been a frequent contributor to History.com since 2005, and is the author of Breaking History: Vanished! (Lyons Press, 2017), which chronicles some of history's most famous disappearances.
| 6 Reasons the Dark Ages Weren’t So Dark
1. The idea of the “Dark Ages” came from later scholars who were heavily biased toward ancient Rome.
In the years following 476 A.D., various Germanic peoples conquered the former Roman Empire in the West (including Europe and North Africa), shoving aside ancient Roman traditions in favor of their own. The negative view of the so-called “Dark Ages” became popular largely because most of the written records of the time (including St. Jerome and St. Patrick in the fifth century, Gregory of Tours in the sixth and Bede in the eighth) had a strong Rome-centric bias.
While it’s true that such innovations as Roman concrete were lost, and the literacy rate was not as high in the Early Middle Ages as in ancient Rome, the idea of the so-called “Dark Ages” came from Renaissance scholars like Petrarch, who viewed ancient Greece and Rome as the pinnacle of human achievement. Accordingly, they dismissed the era that followed as a dark and chaotic time in which no great leaders emerged, no scientific accomplishments were made and no great art was produced.
2. The Church replaced the Roman Empire as the most powerful force in Europe, redefining the relationship between church and state.
In Rome’s absence, Europe in the Early Middle Ages lacked a large kingdom or other political structure as a single centralizing force, apart from a brief period during the reign of the Frankish Emperor Charlemagne (more on that later). Instead, the medieval Church grew into the most powerful institution in Europe, thanks in no small part to the rise of monasticism, a movement that began in the third century with St. Anthony of Egypt and would rise to its most influential point in the High Middle Ages (1000-1300 A.D.).
Kings, queens and other rulers during the early medieval period drew much of their authority and power from their relationship with the Church. The rise of a strong papacy, beginning with Gregory the Great (pope from 590 to 604), meant that European monarchs could not monopolize power, unlike in the days of the Roman Empire. | no |
Archaeology | Were there Dark Ages in the Middle Ages? | yes_statement | there were "dark" ages in the middle ages.. the middle ages experienced a period known as the "dark" ages. | https://study.com/learn/lesson/the-dark-ages.html | The Dark Ages | Definition, Causes & History - Video & Lesson ... | The Dark Ages: Causes & History
Rachel Becker is a freelance writer in the Pacific NW of the USA. She has a background of twelve years as an elementary educator working in public schools. She earned both her Bachelor of Arts and Letters and Masters in Education degrees at Portland State University. She holds a teaching certificate for the state of Oregon and is endorsed for multiple subjects Grades K-8. Twelve years of teaching grades K-5 in Title 1 schools provided her with ample experience in meeting diverse student needs and building community for connection with all students.
War marked the period of the Dark Ages with hand to hand combat in the Crusades.
The Dark Ages
The fall of the Roman Empire ushered in a time of great change throughout Europe, and with it, what is referred to by some as the Dark Ages, a five-hundred-year time period from roughly 500 CE to 1000 CE. Most academics no longer describe this time as "dark," in part because the span between the rise and fall of the empires that ruled Europe is difficult for historians to fully document. Prior to the beginning of the Dark Ages, Romans ruled European lands. Their contributions to European development were most notable in the areas of science, philosophy, government, and architecture. The fall of Rome and the ensuing dispersal of people throughout Western Europe negatively impacted the process of keeping historical documentation, making it difficult for historians to maintain accurate assessments of the period.
The term, the Middle Ages, refers to a period of time that also began after the fall of the Roman Empire and includes the time period associated with the Dark Ages. In 180 CE, Emperor Marcus Aurelius died, initiating the long process of the Roman Empire collapsing over the next four hundred years. The Middle Ages began in approximately 500 CE and ended in 1500 CE. So, the first five hundred years of the Middle Ages are characterized by the Dark Ages.
Europe had already experienced similar circumstances with the Greek Dark Ages, which began in 1100 BCE, with the end of the Mycenaean era, and ended around 700 BCE. For reference, the Mycenaeans were the first Greeks to formulate the Greek language, which evolved into a common tongue and a system for writing. Greek civilization created many things still referenced today, including Greek mythology, legend, and story-telling shared through oratory.
When Were the Dark Ages?
The Dark Ages in Europe transpired from 500-1000 CE, beginning with the end of the Roman Empire. Many consider this time period dark due to a lack of cultural advancement in society, an assumption that arose from the minimal historical documentation produced during the period.
The Middle Ages
The Middle Ages time period includes the time period associated with the Dark Ages, plus an additional five hundred years. The Middle Ages began in 500 CE and ended around 1500 CE with the start of the 14th century Renaissance period. This time period saw the rise of the medieval church, which gained more power as monarchies emerged as powerful forces across Europe.
Why Is It Called the Dark Ages?
The term, Dark Ages, refers to the idea that Europe was enveloped in darkness due to a lack of cultural advancement. Many held this belief because there was little evidence to prove otherwise in the Western European world. After Roman rule ended in Western Europe, feudalism emerged, and the Catholic Church gained power. People were also quite fearful and superstitious about all of life and authority. Advancement of culture, science, and mathematics seemingly halted with the change of power. The Renaissance period, which followed the Middle Ages, tells us more about the Dark Ages than the actual time period itself. Renaissance thinkers revived interest in Greek and Roman philosophy, considering them to be greater thinkers than the European thinkers of the Dark Ages.
Petrarch and the Renaissance
Who coined the phrase, Dark Ages? A scholar and poet named Petrarch, who lived from 1304-1374 CE, is known for creating this term. He described the men of intellectual prowess of the era as living 'cloaked in darkness.' In contrast to the Dark Ages, the Renaissance was a time period historians describe as returning to humanism and valuing advancement.
The Dark Ages in Europe: History
Throughout the Dark Ages in Europe, the fall of the Roman Empire greatly impacted the migration of humans across Western Europe. This time period was once thought to be a time lacking in cultural, scientific, and economic growth, but it was an impactful time for Europe itself. After Roman rule gave way to Germanic traditions, kingdoms and monarchs rose to prominence. Political policy, rulers, and religion became the driving forces of European culture. Christianity, more specifically Catholicism, ruled the system of churches. This time period produced splendid Gothic art and architecture and also witnessed the Crusades.
Gilded art work remains as tangible evidence of the dark ages.
What Caused the Dark Ages?
The cause of the Dark Ages is associated with a series of events related to the downfall of the Roman Empire. In 395 CE, after the death of Emperor Theodosius, the Roman Empire was divided in half. In 410 CE, the Visigoths entered Rome and destroyed much of the city, to the extent that it was never the same. Further sacks followed, culminating with the Vandals in 455 CE. In 476 CE, Odoacer, a German ruler, removed Emperor Romulus Augustulus and made himself the king of Italy. This moment in history is cited as the demise of the Western Roman Empire. In 481 CE, Clovis took the throne in France and in 496 CE converted to Christianity. This led to forming a relationship with the Pope. A church and state bond began and continued throughout the entirety of the Dark Ages.
Notable Events
There were many events that transpired during this period, but two are quite notable. The Black Death, or Plague, began in 1347 CE, when trade ships arriving from the Black Sea docked at the Sicilian port of Messina, Italy. Citizens of the port were shocked to find mostly dead and deathly ill sailors on board. The plague went on to ravage Europe, killing over twenty million people.
The fall of the Eastern Roman Empire, or Byzantine Empire, is another important event from the Dark Ages. The Byzantine Empire lasted from approximately 330 CE to 1453 CE in Eastern Europe, and many of its ideas continue to impact modern thought. The empire ended in 1453 CE when, after a roughly fifty-five-day siege, the Ottomans breached Constantinople's land walls and entered the city.
The End of the Dark Ages
Historians believe that the Dark Ages ended when Constantinople, the capital city of the Byzantine Empire, fell to the Ottoman Empire in 1453 CE after a siege of roughly two months. After this shift of power, peasant life in Western Europe began to change as well. Feudalism had ruled the working class, and it began to shift with the rise of crop development and the introduction of the heavy plow. With the increase of food supply, the population in Western Europe began to expand. People moved to port cities and out of rural lands. A slow rise in development led to further cultural change in the population of Western Europe.
Problems with the Concept
The term, Dark Ages, is often seen as problematic by contemporary historians for a few reasons. The term itself leads people to believe that there was no development of society throughout Western Europe during this time. This is incorrect. Scant historical documentation and other written evidence left scholars with very little resource material to study. There were developments in the Byzantine Empire, as well as in Asia and the Middle East, during the Dark Ages. Looking at both Western and Eastern cultures, scholars can find evidence of the cultural development that occurred during this time as well.
Anachronism
The phrase, Dark Ages, is anachronistic in some ways. An examination of the time period reveals that there was cultural development in various places. The Carolingian Renaissance, which occurred around 800 CE, is one example. Charlemagne ruled the Carolingian Empire and united much of Europe. He saved literature and other works of cultural advancement from ruin. Many ancient texts were copied, archived, and kept by scholars during his rule.
Eurocentrism
Eurocentrism is the practice of centering a topic on European history and culture, and scholarship on the Dark Ages has traditionally had this focus. The concept of the Dark Ages is firmly rooted in Western European history, yet many cultural advancements in society occurred outside of Western Europe during the period. When scholars limit their view to Western Europe, they commit Eurocentrism.
Lesson Summary
The Dark Ages in Europe occurred between roughly 500 CE and 1000 CE, within the broader Middle Ages (500 to 1500 CE). The term, Dark Ages, was coined by the scholar, Petrarch, during the Renaissance. This time period began after the fall of the Western Roman Empire. The Dark Ages were called that name due to a supposed period of decline in culture and science. There was little written documentation from the period to prove otherwise. The Dark Ages were characterized by feudalism, the introduction of the plow to farming, and the Black Death plague. The time period enveloped Western Europeans in superstition and fear. The Catholic Church became a pivotal force that affected the lives of all who dwelled on the lands. The term the Dark Ages was controversial among scholars because it did not take into account the advancements made outside of Europe, for example, in the Arab world. The fall of Constantinople to the Ottoman Empire in 1453 CE marks the end of the Dark Ages.
The Middle Ages time period took place from 500 CE to 1500 CE in Europe. This was a time in history that fell between the end of the Roman Empire and the modern format of European lands. It was also a time period filled with famine, plague or the Black Death, and war, such as the Crusades. The Middle Ages ended with the start of the 14th century and the Renaissance time period.
What happened during the Dark Ages?
The dark ages occurred from 500 CE to 1000 CE in Western Europe. It began with the fall of the Roman Empire and ended with the start of the Renaissance.
Why is it called the Middle Ages?
It is called the Middle Ages because it was the time period that fell between the end of the Roman Empire and what is considered the modern form of Europe. The dates are 500 CE to 1500 CE.
What are the characteristics of the Middle Ages?
Characteristics of the Middle Ages include the rise of feudalism and the strengthening of the church and state relationship, with Catholicism spreading throughout Europe. The Middle Ages also witnessed the Crusades, as well as the Black Death plague, which killed millions of people.
Why is the Dark Ages important?
The Dark Ages were important because they gave rise to the Renaissance period and its focus on cultural advancement for humanity. During the Dark Ages, humans faced famine, disease, war, and the demise of government. | The Middle Ages began in 500 CE and ended around 1500 CE with the start of the 14th century Renaissance period. This time period saw the rise of the medieval church, which gained more power as monarchies emerged as powerful forces across Europe.
Why Is It Called the Dark Ages?
The term, Dark Ages, refers to the idea that Europe was enveloped in darkness due to a lack of cultural advancement. Many held this belief because there was little evidence to prove otherwise in the Western European world. After Roman rule ended in Western Europe, feudalism emerged, and the Catholic Church gained power. People were also quite fearful and superstitious about all of life and authority. Advancement of culture, science, and mathematics seemingly halted with the change of power. The Renaissance period, which followed the Middle Ages, tells us more about the Dark Ages than the actual time period itself. Renaissance thinkers revived interest in Greek and Roman philosophy, considering them to be greater thinkers than the European thinkers of the Dark Ages.
Petrarch and the Renaissance
Who coined the phrase, Dark Ages? A scholar and poet named Petrarch, who lived from 1304-1374 CE, is known for creating this term. He described the men of intellectual prowess of the era as living 'cloaked in darkness.' In contrast to the Dark Ages, the Renaissance was a time period historians describe as returning to humanism and valuing advancement.
The Dark Ages in Europe: History
Throughout the Dark Ages in Europe, the fall of the Roman Empire greatly impacted the migration of humans across Western Europe. This time period was once thought to be a time lacking in cultural, scientific, and economic growth, but it was an impactful time for Europe itself. After Roman rule gave way to Germanic traditions, kingdoms and monarchs rose to prominence. Political policy, rulers, and religion became the driving forces of European culture. Christianity, more specifically Catholicism, ruled the system of churches. This time period produced splendid Gothic art and architecture and also witnessed the Crusades.
Gilded art work remains as tangible evidence of the dark ages.
| yes |
Archaeology | Were there Dark Ages in the Middle Ages? | yes_statement | there were "dark" ages in the middle ages.. the middle ages experienced a period known as the "dark" ages. | https://www.historyhit.com/why-were-the-early-middle-ages-called-the-dark-ages/ | Why Was 900 Years of European History Called 'the Dark Ages ... | Why Was 900 Years of European History Called ‘the Dark Ages’?
26 Oct 2022
The ‘Dark Ages’ were between the 5th and 14th centuries, lasting 900 years. The timeline falls between the fall of the Roman Empire and the Renaissance. It has been called the ‘Dark Ages’ because many suggest that this period saw little scientific and cultural advancement. However, the term doesn’t stand up to much scrutiny – and many medieval historians have dismissed it.
Why is it called the Dark Ages?
Francesco Petrarca (known as Petrarch) was the first person to coin the term ‘Dark Ages’. He was an Italian scholar of the 14th century. He called it the ‘Dark Ages’ as he was dismayed at the lack of good literature at that time.
The classical era was rich with apparent cultural advancement. Both Roman and Greek civilisations had provided the world with contributions to art, science, philosophy, architecture and political systems.
Granted, there were aspects of Roman and Greek society and culture that were very unsavoury (Gladiatorial combat and slavery to name a few), but after Rome’s fall and subsequent withdrawal from power, European history is portrayed as taking a ‘wrong turn’.
After Petrarch’s disparagement of the ‘dark age’ of literature, other thinkers of the time expanded this term to encompass this perceived dearth of culture in general across Europe between 500 and 1400. These dates are under constant scrutiny by historians as there is a degree of overlap in dates, cultural and regional variations and many other factors. The time is often referred to with terms like the Middle Ages or Feudal Period (another term that is now contentious amongst medievalists).
Later on, as more evidence came to light after the 18th century, scholars started to restrict the term ‘Dark Ages’ to the period between the 5th and 10th centuries. This period came to be referred to as the Early Middle Ages.
Busting the ‘Dark Ages’ myth
Labelling this large period of history as a time of little cultural advancement and its peoples as unsophisticated is, however, a sweeping generalisation and regularly considered to be incorrect. Indeed, many argue that ‘the Dark Ages’ never truly happened.
In a time epitomised by extensive increases in Christian missionary activity, it appears Early Middle Age kingdoms lived in a very interconnected world.
The early English Church for instance relied heavily on priests and bishops who had trained abroad. In the late 7th century, the archbishop Theodore founded a school at Canterbury that would go on to become a key centre of scholarly learning in Anglo-Saxon England. Theodore himself had originated from Tarsus in south-eastern Asia Minor (now south-central Turkey) and had trained in Constantinople.
People were not just travelling to Anglo-Saxon England however. Anglo-Saxon men and women were also regular sights in mainland Europe. Nobles and commoners went on frequent and often perilous pilgrimages to Rome and even further afield. A record even survives of Frankish observers complaining about a monastery in Charlemagne‘s kingdom that was run by an English abbot called Alcuin:
“O God, deliver this monastery from these Britons who come swarming around this countryman of theirs like bees returning to their queen.”
International trade
Trade too reached far and wide during the Early Middle Ages. Certain Anglo-Saxon coins have European influences, visible in two gold Mercian coins. One coin dates to the reign of King Offa (r. 757–796). It is inscribed with both Latin and Arabic and is a direct copy of coinage minted by the Islamic Abbasid Caliphate based in Baghdad.
The other coin portrays Coenwulf (r. 796–821), Offa’s successor, as a Roman emperor. Mediterranean-influenced gold coins such as these probably reflect extensive international trade.
The early Middle Age kingdoms thus lived in a very interconnected world, and from this sprang many cultural, religious and economic developments.
Raban Maur (left), supported by Alcuin (middle), dedicates his work to Archbishop Otgar of Mainz (Right)
Image Credit: Fulda, Public domain, via Wikimedia Commons
The Early Middle Age renaissance of literature and learning
Developments in learning and literature did not disappear during the Early Middle Ages. In fact, it appears it was quite the opposite: literature and learning were highly valued and encouraged in many Early Middle Age kingdoms.
During the late eighth and early ninth centuries for instance, the Emperor Charlemagne’s court became the centre for a renaissance of learning that ensured the survival of many classical Latin texts as well as generating much that was new and distinctive.
Across the Channel in England, around 1300 manuscripts survive dating to before 1100. These manuscripts focus on a wide array of topics: religious texts, medicinal remedies, estate management, scientific discoveries, travels to the continent, prose texts and verse texts to name a few.
Monasteries were centres of production for most of these manuscripts during the Early Middle Ages. They were created by priests, abbots, archbishops, monks, nuns or abbesses.
It is notable that women had a significant role in literature and learning at this time. An eighth century abbess of Minster-in-Thanet called Eadburh taught and produced poetry in her own verse, while an English nun called Hygeburg recorded a pilgrimage to Jerusalem made by a West-Saxon monk called Willibald at the beginning of the eighth century.
Many well-off women who were not members of a religious community also had well-documented interests in literature, such as Queen Emma of Normandy, the wife of King Cnut.
It appears literature and learning did suffer upon the arrival of the Vikings during the ninth century (something which King Alfred the Great famously bemoaned). But this lull was temporary and it was followed by a resurgence in learning.
The painstaking work required to create these manuscripts meant that they were highly-cherished by the elite class in Early Middle Age Christian Europe; owning literature became a symbol of power and wealth.
Fully debunked?
There is plenty of evidence to negate Petrarch’s view that the Early Middle Ages was a dark age of literature and learning. In fact, it was a time where literature was encouraged and highly-valued, especially by the upper-echelons of Early Middle Age society.
The term ‘the Dark Ages’ gained greater usage during the 18th century Enlightenment, when many philosophers felt the religious dogma of the Medieval period did not sit well within the new ‘Age of Reason.’
They saw the Middle Ages as ‘dark’ for both its lack of records, and the central role of organised religion, contrasting against the lighter periods of antiquity and the Renaissance.
During the 20th century, many historians have rejected the term, arguing that there is a sufficient amount of scholarship and understanding of the Early Middle Ages to make it redundant. However, the term is still used in popular culture and regularly referred to.
It will take time for the term the ‘Dark Ages’ to fully fall out of use but it is clear that it is an outdated and pejorative term for a period where art, culture and literature flourished across Europe. | Both Roman and Greek civilisations had provided the world with contributions to art, science, philosophy, architecture and political systems.
Granted, there were aspects of Roman and Greek society and culture that were very unsavoury (Gladiatorial combat and slavery to name a few), but after Rome’s fall and subsequent withdrawal from power, European history is portrayed as taking a ‘wrong turn’.
After Petrarch’s disparagement of the ‘dark age’ of literature, other thinkers of the time expanded this term to encompass this perceived dearth of culture in general across Europe between 500 and 1400. These dates are under constant scrutiny by historians as there is a degree of overlap in dates, cultural and regional variations and many other factors. The time is often referred to with terms like the Middle Ages or Feudal Period (another term that is now contentious amongst medievalists).
Later on, as more evidence came to light after the 18th century, scholars started to restrict the term ‘Dark Ages’ to the period between the 5th and 10th centuries. This period came to be referred to as the Early Middle Ages.
Busting the ‘Dark Ages’ myth
Labelling this large period of history as a time of little cultural advancement and its peoples as unsophisticated is, however, a sweeping generalisation and regularly considered to be incorrect. Indeed, many argue that ‘the Dark Ages’ never truly happened.
In a time epitomised by extensive increases in Christian missionary activity, it appears Early Middle Age kingdoms lived in a very interconnected world.
The early English Church for instance relied heavily on priests and bishops who had trained abroad. In the late 7th century, the archbishop Theodore founded a school at Canterbury that would go on to become a key centre of scholarly learning in Anglo-Saxon England. Theodore himself had originated from Tarsus in south-eastern Asia Minor (now south-central Turkey) and had trained in Constantinople.
People were not just travelling to Anglo-Saxon England, however. Anglo-Saxon men and women were also regular sights in mainland Europe.
The Dark Ages: Definition, History & Timeline (Study.com, https://study.com/academy/lesson/the-dark-ages-definition-history-timeline.html)
Nate Sullivan holds an M.A. in History and an M.Ed. He is an adjunct history professor, middle school history teacher, and freelance writer.
Traditionally, the Dark Ages was thought to be a time when little cultural development and scientific discovery happened. Learn more about the Dark Ages, and explore the controversy that surrounds the historical timeline of the era.
Updated: 03/18/2023
The Dark Ages is a term for the period of time between the fall of the Roman Empire and the beginning of the Italian Renaissance and the Age of Discovery. Many textbooks list the Dark Ages as extending from 500 to 1500 AD, although it should be noted that these are approximations.
The ancient Greek and Roman civilizations were remarkably advanced for their time. Both civilizations made a number of contributions to human progress, notably in the areas of science, government, philosophy, and architecture. Some scholars perceive Europe as having been plunged into darkness when the Roman Empire fell in around 500 AD. The Middle Ages are often said to be dark because of a supposed lack of scientific and cultural advancement.
During this time, feudalism was the dominant political system. The feudal system of labor hindered upward social mobility, which basically means that poor people had very little opportunity to improve their condition in life. Religious superstition was also widespread during this time. The Catholic Church was extremely institutionalized and often opposed the scientific and cultural advancements the Greeks and Romans had pioneered.
Increasingly, historians are refraining from using the term 'Dark Ages.' Many historians now regard the darkness of the Dark Ages as a common misconception. Recent scholarship has brought to light the scientific and cultural contributions of the Arab world. Historians also recognize the intellectual strength of the medieval scholastics.
The Dark Ages is a categorization commonly used to describe the period between the fall of the Roman Empire and the beginning of the Italian Renaissance and the Age of Exploration. Roughly speaking, the Dark Ages corresponds to the Middle Ages, or from 500 to 1500 AD. This period has traditionally been thought of as dark, in the sense of having very little scientific and cultural advancement.
During this time, feudalism prevented upward social mobility, and the Catholic Church held a firm grip over which worldviews should or should not be espoused. Superstition was widespread. That said, historians are increasingly rethinking the Dark Ages. Many no longer use the term because new scholarship is showing that this era may not have been as dark as had previously been thought.
The not so dark Middle Ages (Europeana, https://www.europeana.eu/en/blog/the-not-so-dark-middle-ages)
For centuries, the terms ‘Dark Ages’ and ‘Middle Ages’ have been synonymous. Until very recently, they were used almost interchangeably to label a period ranging roughly from the fall of the Roman Empire (in the second half of the 5th century) to as early as the mid-13th century or as late as the first half of the 16th century. ‘The Dark Ages’ is a particularly loaded label, however. In fact, it is a value judgement, and, as with all value judgments, the extent of its ‘darkness’ is very much in the eye of the beholder.
For centuries, the Middle Ages have been referred to as an era of barbarism and economic, cultural and intellectual decline. This myth is so deeply rooted in Western culture that even to this day, when something is considered to be brutal, unsophisticated or outdated, one might describe it as being in the ‘Dark Ages’ or as being ‘positively medieval’. Today, most modern scholars agree that the ‘Dark Ages’ refers to a long and complex period of history, whose perceived ‘darkness’ throughout early modern times depended heavily on changing political, ideological and religious pursuits, and that, on the contrary, the Middle Ages were an era of great inventiveness during which art, architecture, literature, international trade and culture flourished.
Why the ‘dark’ ages?
The idea of a dark intermediary period between the Roman Empire and the Renaissance came from the mid-14th century Italian scholar, Petrarch, who divided history into two periods: the classical period in which Greeks and Romans brightened the world with their intellectual achievements, and a period of darkness and cultural stagnation (in which he himself felt to be living).
Although ‘medieval’ people saw themselves as a continuation of Antiquity, the idea of intellectual darkness was not new to them. 9th-century Carolingian scholar, Walahfrid Strabo (a Latinist and a teacher), thought that the Carolingian Renaissance led by the emperor Charlemagne had been a bright period of learning and intellectual development, illuminating the darkness that had preceded it. However, having written many years after the Emperor’s death and during a civil war, Strabo regretted (as Petrarch would do centuries later) the decline of knowledge in his own ‘barbarous age’, which, according to him, was growing ever dimmer. Contrary to Strabo’s general pessimism about his own times, Petrarch did hope that ancient civilisation would be recovered and that he might even live to witness it.
During the Renaissance (the transition from the Middle Ages to modernity covering the 15th and 16th centuries), Petrarch’s idea of a dark and barbaric medieval past fed into humanists’ belief in their own present time as the rebirth of classical culture. This belief had been conditioned by the words of Giorgio Vasari, a 16th-century artist and art historian, who considered that Roman art had been the best and most divine of any other. Humanists, like Giorgio Vasari, believed that the period that had preceded them had been a dark intermediary time between the higher valued Classical Antiquity and the Renaissance. The ‘Middle Ages’ then, was a period that brought about the loss of the great intellectual achievements of Antiquity.
While humanists criticised what from their point of view was a lack of Latin language, literature and culture, 16th-century Protestant reformers did not see the ‘Dark Ages’ as being problematic since to them this period represented the rise and expansion of the Catholic Church and of papal and clerical corruption.
During the 18th century, these criticisms were what led the ‘Dark Ages’ to become the Enlightenment’s worst enemy.
The Enlightenment
As an intellectual and philosophical movement, the Enlightenment (late 17th and into the 18th centuries) was founded on the ideals of the pursuit of knowledge and happiness, reason, progress, light, and freedom. It was precisely the concept of democracy and freedom that led the German art historian, Johann Joachim Winckelmann, to bring the superiority of Greek art to the forefront, rejecting all art created in contexts of tyrannical rule. The Middle Ages, ruled by the Catholic Church and the monarchy, were seen to be obscure, full of superstition, hierarchy and serfdom. The new age of Enlightenment was a complete contrast and brought its own style of artistic expression - 'Neoclassicism' - that evoked the earlier and superior forms of art and thinking.
Below: The last meeting of the legendary Scandinavian princess Hillelil and her lover, the English prince Hildebrand, before he went off to battle and perished.
Reinterpreting the ‘Dark Ages’
In the final years of the 18th century, the ‘Dark Ages’ started to take on new meanings.
The uncertainties caused by the French Revolution and the world's rapid industrialisation reversed the negative reputation of the Middle Ages. Many European nations began to see them as a time in which their national identities were founded and they could envision their political future. The rediscovery of founding myths, courtly literature and religious art made for a romanticised, exotic and nostalgic ‘Dark Ages’. Some people saw it as a time of impressive architecture and others as an age of harmony, chivalry and faith.
Coincidentally, this was also a time in which travellers started to re-discover (with romantic delight) the ruins of gothic churches and cathedrals, languishing and decaying with the passage of time. This general appreciation of the ‘Dark Ages’ in literature, (and later) art and architecture became widespread in the third quarter of the 19th century, and was not based as much on actual knowledge and appreciation of medieval culture and inventiveness as on a gothic taste for all that was morbid, quaint, sentimental and obscure.
20th century research
The 20th century called into question the idea of the ‘Dark Ages’. Scholars began to study all aspects of medieval society and culture, gradually unveiling what, for centuries, had been a millennium we knew very little about. Their studies identified many periods of political, social, intellectual and economic Renaissance during the Middle Ages, and revealed that the philosophical and scientific roots of ‘The Renaissance of the Twelfth Century’ in particular, actually laid the foundations for the achievements of the Italian Renaissance and for the 17th century Scientific Revolution.
Below: This coin is inscribed in both Latin and Arabic. It was made for Offa (reigned 757-796), king of Mercia, and the design was copied directly from a dinar coin of the Abbasid caliph al-Mansur (754-775)
The 20th century also revealed a profoundly inter-connected world. Medieval international trade, for example, was so extensive that some 8th-century Anglo-Saxon coins were inscribed in both Latin and Arabic, revealing far-reaching connections. Christian missionary activity was also widespread, taking ancient knowledge to monasteries far and wide.
Medieval manuscripts and a new book trade
Tight links with the East led to the transmission of texts that were translated from Greek to Latin, copied and then studied. During the Carolingian Renaissance, these translations were reviewed and corrected, ensuring the preservation of classical texts. Education, knowledge and art became a driving force in recreating the splendours of Antiquity while building a unified Christian Empire. In fact, classical literature and philosophy were not ‘lost’ at all during the Middle Ages, they were just re-interpreted under the lens of Christianity and focused towards the most important of medieval pursuits: salvation.
Below: The opening page of the Gospel of Saint Matthew from a copy of the Four Gospels, made during the reign of the Emperor Charlemagne. It is written entirely in gold with headings in red.
The sheer amount of effort (and cost) that went into making and decorating a medieval manuscript reflects a profound appreciation for knowledge and the reader’s expectation of being ‘illuminated’ by the wisdom contained within the page. In fact, manuscripts produced during this period were of an immensely rich variety. Copies and translations were made of classical, historical, theological and liturgical texts. The latter in particular, because they contained the Holy Script, could boast the most intensely rich colours and gold ink, and created astonishing effects of fluctuating light. Owning and collecting one of these books was not only a sign of great culture but of power and wealth.
While it is true that only a small fraction of the population had access to written texts (or was capable of reading them), medieval knowledge and culture existed in multiple forms.
To begin with, the 11th and 12th centuries saw the foundation of the first universities (perhaps the most successful and lasting of medieval inventions) in cities such as Bologna, Oxford, Salamanca and Paris. University cities fostered a rising new book trade that laid out the foundations for the modern day printed book. Paintings and illuminations were also a means for acquiring knowledge. Bursting with colour and gold leaf, they invited the mind to wander (quite literally). Certain manuscripts told of the voyages and adventures of great explorers such as Marco Polo, depicting bizarre beings that populated the farthest corners of the earth (and yes, medieval people were aware that the earth was round!). The medieval world was a big and bustling place, and seals and coins enforced ideas of law and order as well as territorial identity.
Below: Bizarre mythical beings said to have been discovered by Marco Polo during his voyages through Asia
The Middle Ages were also a time of profound faith. Great knowledge, engineering and innovation went into building churches and cathedrals that have, effortlessly, stood the test of time. These medieval skyscrapers and their giant stained glass windows, have narrated sacred history while inundating space with light and colour. The so-called ‘Dark Ages’ were, in fact, not that dark after all.
This blog is part of the Art of Reading in the Middle Ages project which explores how medieval reading culture evolved and became a fundamental aspect of European culture.
Middle Ages (Wikipedia, https://en.wikipedia.org/wiki/Middle_Ages)

The Late Middle Ages was marked by difficulties and calamities including famine, plague, and war, which significantly diminished the population of Europe; between 1347 and 1350, the Black Death killed about a third of Europeans. Controversy, heresy, and the Western Schism within the Catholic Church paralleled the interstate conflict, civil strife, and peasant revolts that occurred in the kingdoms. Cultural and technological developments transformed European society, concluding the Late Middle Ages and beginning the early modern period.
Terminology and periodisation
The Middle Ages is one of the three major periods in the most enduring scheme for analysing European history: Antiquity, the Middle Ages and the Modern Period.[2] A similar term first appears in Latin in 1469 as media tempestas ('middle season').[3] The adjective medieval,[A][5] meaning pertaining to the Middle Ages, derives from medium aevum ('middle age'),[4] a Latin term first recorded in 1604.[6] Leonardo Bruni was the first historian to use tripartite periodisation in his History of the Florentine People (1442),[7] and it became standard with 17th-century German historian Christoph Cellarius.[8]
Medieval writers divided history into periods such as the Six Ages or the Four Empires, and considered their time to be the last before the end of the world.[9] In their concept, their age had begun when Christ had brought light to mankind, contrasted with the spiritual darkness of previous periods. The Italian humanist and poet Petrarch (d. 1374) turned the metaphor upside down, stating that the age of darkness had begun when emperors of non-Italian origin assumed power in the Roman Empire.[10]
The most commonly given starting point for the Middle Ages is around 500,[11] with 476—the year the last Western Roman Emperor was deposed—first used by Bruni.[7] For Europe as a whole, 1500 is often considered to be the end of the Middle Ages,[12] but there is no universally agreed-upon end date. Depending on the context, events such as the conquest of Constantinople by the Turks in 1453, Christopher Columbus's first voyage to the Americas in 1492, or the Protestant Reformation in 1517 are sometimes used.[13] English historians often use the Battle of Bosworth Field in 1485 to mark the end of the period.[14]
Historians from Romance language-speaking countries tend to divide the Middle Ages into two parts: an earlier "High" and later "Low" period. English-speaking historians, following their German counterparts, generally subdivide the Middle Ages into three intervals: "Early", "High", and "Late".[2] In the 19th century, the entire Middle Ages were often referred to as the Dark Ages, but with the adoption of these subdivisions, use of this term was restricted to the Early Middle Ages in the early 20th century.[15]
Later Roman Empire
The Roman Empire reached its greatest territorial extent during the 2nd century AD; the following two centuries witnessed the slow decline of Roman control over its outlying territories.[17] Runaway inflation, external pressure on the frontiers, and outbreaks of plague combined to create the Crisis of the Third Century, with emperors coming to the throne only to be rapidly replaced by new usurpers.[18] Military expenses steadily increased, mainly in response to the war with the Sasanian Empire.[19] The army doubled in size, and cavalry and smaller units replaced the legion as the main tactical unit.[20] The need for revenue led to increased taxes and a decline in numbers of the curial, or landowning, class.[19] More bureaucrats were needed in the central administration to deal with the needs of the army, which led to complaints from civilians that there were more tax-collectors in the empire than tax-payers.[20]
For much of the 4th century, Roman society stabilised in a new form that differed from the earlier classical period, with a widening gulf between the rich and poor, and a decline in the vitality of the smaller towns.[24] Another change was the Christianisation, or conversion of the empire to Christianity. The process was accelerated by the conversion of Constantine the Great, and Christianity emerged as the empire's dominant religion by the end of the century.[25] Debates about Christian theology intensified, and those who persisted with theological views condemned at the ecumenical councils faced persecution. Such heretical views survived through intensive proselytizing campaigns outside the empire, or due to local ethnic groups' support in the eastern provinces, like Arianism among the Germanic peoples, or Monophysitism in Egypt and Syria.[26][27] Judaism remained a tolerated religion, although legislation limited Jews' rights.[28]
Civil war between rival emperors became common in the middle of the 4th century, diverting soldiers from the empire's frontier forces and allowing invaders to encroach.[29] Although the movements of peoples during this period are usually described as "invasions", they were not just military expeditions but migrations into the empire.[30] In 376, hundreds of thousands of Goths, fleeing from the Huns, received permission from Emperor Valens (r. 364–378) to settle in Roman territory in the Balkans. The settlement did not go smoothly, and when Roman officials mishandled the situation, the Goths began to raid and plunder.[B] Valens, attempting to put down the disorder, was killed fighting the Goths at the Battle of Adrianople on 9 August 378.[32] The Visigoths, a Gothic group, invaded the Western Roman Empire in 401; the Alans, Vandals, and Suevi crossed into Gaul in 406, and into modern-day Spain in 409. A year later the Visigoths sacked the city of Rome.[33][34] The Franks, Alemanni, and the Burgundians all ended up in Gaul while the Angles, Saxons, and Jutes settled in Britain,[35] and the Vandals conquered the province of Africa.[36] The Hunnic king Attila (r. 434–453) led invasions into the Balkans in 442 and 447, Gaul in 451, and Italy in 452. The Hunnic threat remained until Attila's death in 453, when the Hunnic confederation he led fell apart.[37]
When dealing with the migrations, the Eastern Roman elites combined the deployment of armed forces with gifts and grants of offices to the tribal leaders, whereas the Western aristocrats failed to support the army but also refused to pay tribute to prevent invasions by the tribes.[30] These invasions led to the division of the western section of the empire into smaller political units, ruled by the tribes that had invaded.[38] The emperors of the 5th century were often controlled by military strongmen such as Stilicho (d. 408), Aetius (d. 454), Aspar (d. 471), Ricimer (d. 472), or Gundobad (d. 516), who were partly or fully of non-Roman ancestry.[39] The deposition of the last emperor of the west, Romulus Augustulus, in 476 has traditionally marked the end of the Western Roman Empire.[40][C] The Eastern Roman Empire, often referred to as the Byzantine Empire after the fall of its western counterpart, had little ability to assert control over the lost western territories although the Byzantine emperors maintained a claim over the territory.[41]
Post-Roman kingdoms
In the post-Roman world, the fusion of Roman culture with the customs of the invading tribes is well documented. Popular assemblies that allowed free male tribal members more say in political matters than had been common in the Roman state developed into legislative and judicial bodies.[42] Material artefacts left by the Romans and the invaders are often similar, and tribal items were often modelled on Roman objects.[43] Much of the scholarly and written culture of the new kingdoms was also based on Roman intellectual traditions.[44] Many of the new political entities no longer supported their armies through taxes, instead relying on granting them land or rents. This meant there was less need for large tax revenues and so the taxation systems decayed.[45]
The Germanic groups now collectively known as Anglo-Saxons settled in Britain before the middle of the 5th century. The local culture had little impact on their way of life, but the linguistic assimilation of masses of the local Celtic Britons to the newcomers is evident. By around 600, new political centres emerged, some local leaders accumulated considerable wealth, and a number of small kingdoms such as Wessex and Mercia were formed. Smaller kingdoms in present-day Wales and Scotland were still under the control of the native Britons and Picts.[46] Ireland was divided into even smaller political units, perhaps as many as 150 tribal kingdoms.[47]
The Ostrogoths moved to Italy from the Balkans in the late 5th century under Theoderic the Great (r. 493–526). He set up a kingdom marked by its co-operation between the Italians and the Ostrogoths until the last years of his reign. Power struggles between Romanized and traditionalist Ostrogothic groups followed his death, providing the opportunity for the Byzantines to reconquer Italy in the middle of the 6th century.[48] The Burgundians settled in Gaul, and after an earlier realm was destroyed by the Huns in 436, formed a new kingdom in the 440s.[49] Elsewhere in Gaul, the Franks and Celtic Britons set up stable polities. Francia was centred in northern Gaul, and the first king of whom much is known is Childeric I (d. 481).[D] Under Childeric's son Clovis I (r. 481–511), the founder of the Merovingian dynasty, the Frankish kingdom expanded and converted to Christianity.[51] Unlike other Germanic peoples, the Franks accepted Catholicism which facilitated their cooperation with the native Gallo-Roman aristocracy.[52] Britons fleeing from Britannia – modern-day Great Britain – settled in what is now Brittany.[E][53]
The settlement of peoples was accompanied by changes in languages. Latin, the literary language of the Western Roman Empire, was gradually replaced by vernacular languages which evolved from Latin, but were distinct from it, collectively known as Romance languages. Greek remained the language of the Byzantine Empire, but the migrations of the Slavs expanded the area of Slavic languages in Central and Eastern Europe.[57]
During this period the Eastern Roman Empire remained intact and experienced an economic revival that lasted into the early 7th century. Here political life was marked by closer relations between the political state and Christian Church, with doctrinal matters assuming an importance in Eastern politics that they did not have in Western Europe. Legal developments included the codification of Roman law; the first effort – the Codex Theodosianus – was completed in 438.[59] Under Emperor Justinian (r. 527–565), a more comprehensive compilation took place, the Corpus Juris Civilis.[60]
Justinian almost lost his throne during the Nika riots, a popular revolt of elemental force that destroyed half of Constantinople in 532. After crushing the revolt, he reinforced the autocratic elements of the imperial government and mobilized his troops against the Arian western kingdoms. The general Belisarius (d. 565) conquered North Africa from the Vandals, and attacked the Ostrogoths, but the Italian campaign was interrupted due to an unexpected Sasanian invasion from the east. Between 541 and 543, a deadly outbreak of plague decimated the empire's population. Justinian ceased to finance the maintenance of public roads, and compensated for the lack of military personnel by developing an extensive system of border forts. Within a decade, he resumed expansionism, completing the conquest of the Ostrogothic kingdom, and seizing much of southern Spain from the Visigoths.[61]
Justinian's reconquests and excessive building program have been criticised by historians for bringing his realm to the brink of bankruptcy, but many of the difficulties faced by Justinian's successors were due to other factors, including the epidemic and the massive expansion of the Avars and their Slav allies.[62] In the east, border defences collapsed during a new war with the Sasanian Empire, and the Persians seized large chunks of the empire, including Egypt, Syria, and much of Anatolia. In 626, the Avars and Slavs attacked Constantinople. Two years later, Emperor Heraclius (r. 610–641) launched an unexpected counterattack against the heart of the Sassanian Empire bypassing the Persian army in the mountains of Anatolia; the empire recovered all of its lost territories in the east.[63]
Western society
In Western Europe, some of the older Roman elite families died out while others became more involved with ecclesiastical than secular affairs. Values attached to Latin scholarship and education mostly disappeared. While literacy remained important, it became a practical skill rather than a sign of elite status. By the late 6th century, the principal means of religious instruction in the Church had become music and art rather than the book.[64] Most intellectual efforts went towards imitating classical scholarship, but some original works were created, along with now-lost oral compositions. The writings of Sidonius Apollinaris (d. 489), Cassiodorus (d. c. 585), and Boethius (d. c. 525) were typical of the age.[65] Aristocratic culture focused on great feasts held in halls rather than on literary pursuits. Family ties within the elites were important, as were the virtues of loyalty, courage, and honour. These ties led to the prevalence of the feud in aristocratic society. Most feuds seem to have ended quickly with the payment of some sort of compensation.[66]
Women took part in aristocratic society mainly in their roles as wives and mothers, with the role of mother of a ruler being especially prominent in Merovingian Gaul. In Anglo-Saxon society the lack of many child rulers meant a lesser role for women as queen mothers, but this was compensated for by the increased role played by abbesses of monasteries.[67] Women's influence on politics was particularly fragile, and early medieval authors tended to depict powerful women in a bad light.[F][69] Women usually died at a considerably younger age than men, primarily due to infanticide and complications in childbirth.[G] The disparity between the numbers of marriageable women and grown men led to detailed legal regulation protecting women's interests, including their right to the Morgengabe, or "morning gift".[71] Early medieval laws acknowledged a man's right to have long-term sexual relationships with women other than his wife, such as concubines, but women were expected to remain faithful. Clerics censured sexual unions outside marriage, and monogamy also became the norm of secular law in the 9th century.[72]
Reconstruction of an early medieval peasant village in Bavaria
Most of the early medieval descriptions of the lower classes come from either law codes or writers from the upper classes. As few detailed written records documenting peasant life remain from before the 9th century, surviving information available to historians comes mainly from archaeology.[73] Landholding patterns were not uniform; some areas had greatly fragmented holdings, but in other areas large contiguous blocks of land were the norm. These differences allowed for a wide variety of peasant societies, some dominated by aristocratic landholders and others having a great deal of autonomy.[74] Land settlement also varied greatly. Some peasants lived in large settlements that numbered as many as 700 inhabitants. Others lived in small groups of a few families or on isolated farms.[75] Legislation made a clear distinction between free and unfree, but there was no sharp break between the legal status of the free peasant and the aristocrat, and it was possible for a free peasant's family to rise into the aristocracy over several generations through military service.[76] Demand for slaves was met through warring and raids. Initially, the Franks' expansion and conflicts between the Anglo-Saxon kingdoms supplied the slave market with prisoners of war and captives. After the Anglo-Saxons' conversion to Christianity, slave hunters mainly targeted the pagan Slav tribes—hence the English word "slave" from slavicus, the Medieval Latin term for Slavs.[77] Christian ethics brought about significant changes in the position of slaves in the 7th and 8th centuries. They were no longer regarded as their lords' property, and their right to decent treatment was written into law.[78]
Roman city life and culture changed greatly in the early Middle Ages. Although Italian cities remained inhabited, they contracted significantly in size. Rome, for instance, shrank from a population of hundreds of thousands to around 30,000 by the end of the 6th century. In Northern Europe, cities also shrank, while civic monuments and other public buildings were raided for building materials.[79] The Jewish communities survived the fall of the Western Roman Empire in Spain, southern Gaul and Italy. The Visigothic kings made concentrated efforts to convert the Hispanic Jews to Christianity but the Jewish community quickly regenerated after the Muslim conquest.[80] In contrast, Christian legislation forbade the Jews' appointment to government positions.[81]
Religious beliefs were in flux in the lands along the Eastern Roman and Persian frontiers during the late 6th and early 7th centuries. State-sponsored Christian missionaries proselytised among the pagan steppe peoples, and the Persians made attempts to enforce Zoroastrianism on the Christian Armenians. Judaism was an active proselytising faith, and at least one Arab political leader—Dhu Nuwas, ruler of what is today Yemen—converted to it.[82] The emergence of Islam in Arabia during the lifetime of Muhammad (d. 632) brought about more radical changes. After his death, Islamic forces conquered much of the Near East, starting with Syria in 634–35, continuing with Persia between 637 and 642, and reaching Egypt in 640–41. In the eastern Mediterranean, the Eastern Romans halted the Muslim expansion at Constantinople in 674–78 and 717–18. In the west, Islamic troops conquered North Africa by the early 8th century, annihilated the Visigothic Kingdom in 711, and invaded southern France from 713.[83][84]
The Muslim conquerors bypassed the mountainous northwestern region of the Iberian Peninsula. Here a small kingdom, Asturias, emerged as the centre of local resistance.[85] The defeat of Muslim forces at the Battle of Tours in 732 led to the reconquest of southern France by the Franks, but the main reason for the halt of Islamic growth in Europe was the overthrow of the Umayyad Caliphate and its replacement by the Abbasid Caliphate. The Abbasids were more concerned with the Middle East than Europe, losing control of sections of the Muslim lands. Umayyad descendants took over Al-Andalus (or Muslim Spain), the Aghlabids controlled North Africa, and the Tulunids became rulers of Egypt.[86]
Trade and economy
Migrations and conquests disrupted trade networks around the Mediterranean. The replacement of goods from long-range trade with local products was a trend throughout the old Roman lands. Non-local goods appearing in the archaeological record are usually luxury goods or metalworks.[87] In the 7th and 8th centuries, new commercial networks were developing in northern Europe. Goods like furs, walrus ivory and amber were delivered from the Baltic region to western Europe, contributing to the development of new trade centers in East Anglia, northern Francia and Scandinavia. Conflicts over the control of trade routes and toll stations were common.[88] The various Germanic states in the west all had coinages that imitated existing Roman and Byzantine forms.[89]
The flourishing Islamic economies' constant demand for fresh labour and raw materials opened up a new market for Europe around 750. Europe emerged as a major supplier of house slaves and slave soldiers for Al-Andalus, northern Africa and the Levant. Venice developed into the most important European centre of the slave trade.[90][91] In addition, timber, fur and arms were delivered from Europe to the Mediterranean, while Europe imported spices, medicine, incense, and silk from the Levant.[92] The large rivers connecting distant regions facilitated the expansion of transcontinental trade.[93] Contemporaneous reports indicate that Anglo-Saxon merchants visited fairs at Paris, pirates preyed on tradesmen travelling on the Danube, and Eastern Frankish merchants reached as far as Zaragoza in Al-Andalus.[94]
Church life
The idea of Christian unity endured although differences in ideology and practice between the Eastern and Western Churches became apparent by the 6th century.[95] The formation of new realms reinforced the traditional Christian concept of the separation of church and state in the west, whereas this notion was alien to eastern clergymen who regarded the Roman state as an instrument of divine providence.[95] In the late 7th century, clerical marriage emerged as a permanent focus of controversy. After the Muslim conquests, the Byzantine emperors could less effectively intervene in the west. When Leo III (r. 717–741) prohibited the display of paintings representing human figures in places of worship, the papacy openly rejected his claim to declare new dogmas by imperial edicts.[96] Although the Byzantine Church condemned iconoclasm in 843, further issues such as fierce rivalry for ecclesiastic jurisdiction over newly converted peoples, and the unilateral modification of the Nicene Creed in the west, widened the rift to the extent that the differences became greater than the similarities.[97]
Few of the Western bishops looked to the papacy for religious or political leadership. The only part of Western Europe where the papacy had influence was Britain, where Gregory had sent the Gregorian mission in 597 to convert the Anglo-Saxons to Christianity.[98] Irish missionaries were most active in Western Europe between the 5th and the 7th centuries.[99] People did not visit churches regularly. Instead, meetings with itinerant clergy and pilgrimages to popular saints' shrines were instrumental in the spread of Christian teaching. Clergymen used special handbooks known as penitentials to determine the appropriate acts of penance—typically prayers and fasts—for sinners. The Early Middle Ages witnessed the rise of Christian monasticism. Monastic ideals spread through hagiographical literature, especially the Life of Anthony. Most European monasteries were of the type that focused on community experience of the spiritual life, called cenobitism.[100][101] The Italian monk Benedict of Nursia (d. 547) developed the Benedictine Rule, which became widely used in western monasteries.[102][103] In the east, the monastic rules compiled by Theodore the Studite (d. 826) gained popularity after they were adopted in the Great Lavra on Mount Athos in the 960s, setting a precedent for further Athonite monasteries and turning the mount into the most important centre of Orthodox monasticism.[104]
Monks and monasteries had a deep effect on religious and political life, in various cases acting as land trusts for powerful families and important centres of political authority.[105] They were the main and sometimes only outposts of education and literacy in a region. Many of the surviving manuscripts of the Latin classics were copied in monasteries.[106] Monks were also the authors of new works, including history, theology, and other subjects, written by authors such as Bede (d. 735), a native of northern England.[107] The Byzantine missionary Constantine (d. 869) developed Old Church Slavonic as a new liturgical language, establishing the basis for flourishing Slavic religious literature; around 900 a new script was adopted, now known for Constantine's monastic name as Cyrillic.[108]
In Western Christendom, lay influence over Church affairs came to a climax in the 10th century. Aristocrats regarded the churches and monasteries under their patronage as their personal property, and simony—the sale of Church offices—was a common practice. Simony aroused a general fear about salvation as many believed that irregularly appointed priests could not confer valid sacraments such as baptism.[109] Monastic communities were the first to react to this fear by the rigorous observance of their rules. The establishment of Cluny Abbey in Burgundy in 909 initiated a more radical change as Cluny was freed from lay control and placed under the protection of the papacy. The Cluniac Reforms spread through the founding of new monasteries and the reform of monastic life in old abbeys.[110] Cluny's example indicated that the reformist idea of the "Liberty of the Church" could be achieved through submission to the papacy.[111]
Carolingian Europe
The Merovingian kings customarily distributed Francia among their sons and destroyed their own power base by extensive land grants. In the northeastern Frankish realm Austrasia, the Arnulfings were the most prominent beneficiaries of royal favour. As hereditary Mayors of the Palace, they were the power behind the Austrasian throne from the mid-7th century. One of them, Pepin of Herstal (d. 714), also assumed power in the central Frankish realm Neustria. His son Charles Martel (d. 741) took advantage of the permanent Muslim threat to confiscate church property and raise new troops by parcelling it out among the recruits.[112]
Charles Martel's son Pepin the Short (r. 751–768) left his kingdom in the hands of his two sons, Charles, more often known as Charlemagne or Charles the Great (r. 768–814), and Carloman (r. 768–771). When Carloman died of natural causes, Charlemagne reunited Francia and embarked upon a programme of systematic expansion, rewarding allies with war booty and command over parcels of land. He subjugated the Saxons, conquered the Lombards, and created a new border province in northern Spain.[115] Between 791 and 803, Frankish troops destroyed the Avar realm, which facilitated the development of small Slavic principalities, mainly ruled by ambitious warlords under Frankish suzerainty.[116][H] The coronation of Charlemagne as emperor on Christmas Day 800 marked a revival of the Western Roman Empire and asserted the Frankish realm's equivalence to the Byzantine state. In 812, as a result of careful and protracted negotiations, the Byzantines acknowledged Charlemagne's new title but without recognizing him as a second "emperor of the Romans".[118]
The Carolingian Empire was administered by an itinerant court that travelled with the emperor, as well as approximately 300 imperial officials called counts, who administered the counties the empire had been divided into.[119] The central administration supervised the counts through imperial emissaries called missi dominici, who served as roving inspectors and troubleshooters. The clerics of the royal chapel were responsible for recording important royal grants and decisions.[120] Charlemagne's court in Aachen was the centre of the cultural revival sometimes referred to as the "Carolingian Renaissance". Literacy increased, as did development in the arts, architecture and jurisprudence, as well as liturgical and scriptural studies. Charlemagne's chancery—or writing office—made use of a new script today known as Carolingian minuscule,[I] allowing a common writing style that advanced communication across much of Europe.
Charlemagne sponsored changes in church liturgy, imposing the Roman form of church service on his domains, as well as the Gregorian chant in liturgical music for the churches. An important activity for scholars during this period was the copying, correcting, and dissemination of basic works on religious and secular topics, with the aim of encouraging learning. New works on religious topics and schoolbooks were also produced.[122] Grammarians of the period modified the Latin language, changing it from the Classical Latin of the Roman Empire into a more flexible form now called Medieval Latin.[123]
Breakup of the Carolingian Empire
Charlemagne continued the Frankish tradition of dividing his empire between all his sons, but only one son, Louis the Pious (r. 814–840), was still alive by 813. Just before Charlemagne died in 814, he made Louis co-emperor. Louis's reign was marked by numerous divisions of the empire among his sons, and civil wars between various alliances of father and sons over the control of various parts of the empire.[124]
By the Treaty of Verdun (843), a kingdom between the Rhine and Rhone rivers was created for Lothair I to go with his lands in Italy, and his imperial title was recognised. Louis the German was in control of Bavaria and the eastern lands in modern-day Germany. Charles the Bald received the western Frankish lands, comprising most of modern-day France.[125] Charlemagne's grandsons and great-grandsons divided their kingdoms between their descendants, eventually causing all internal cohesion to be lost.[126] There was a brief re-uniting of the empire by Charles the Fat in 884, although the actual units of the empire retained their separate administrations.[127] By the time he died early in 888, the Carolingians were close to extinction, and non-dynastic claimants assumed power in most of the successor states.[128] In the eastern lands the dynasty died out with the death of Louis the Child (r. 899–911), and the selection of the Franconian duke Conrad I (r. 911–918) as king.[129] In West Francia, the dynasty was restored first in 898, then in 936, but the last Carolingian kings were unable to keep the powerful aristocracy under control. In 987 the dynasty was replaced, with the crowning of Hugh Capet (r. 987–996) as king.[J][130] Although the Capetian kings remained nominally in charge, much of the political power devolved to the local lords in medieval France.[131]
Frankish culture and the Carolingian methods of state administration had a significant impact on the neighboring peoples. The Frankish threat triggered the formation of new states along the empire's eastern frontier—Bohemia, Moravia, and Croatia.[132] The breakup of the Carolingian Empire was accompanied by invasions, migrations, and raids by external foes. The Atlantic and northern shores were harassed by the Vikings, who also raided the British Isles and settled there. In 911, the Viking chieftain Rollo (d. c. 931) received permission from the Frankish king Charles the Simple (r. 898–922) to settle in what became Normandy. The eastern parts of the Frankish kingdoms, especially Germany and Italy, were under continual Magyar assault until the invaders' defeat at the Battle of Lechfeld in 955.[133] In the Mediterranean, Arab pirates launched regular raids against Italy and southern France. The Muslim states also began expanding: the Aghlabids conquered Sicily, and the Umayyads of Al-Andalus annexed the Balearic Islands.[134]
The Vikings' settlement in the British Isles led to the formation of new political entities, including the small but militant Kingdom of Dublin in Ireland.[135] In Anglo-Saxon England, King Alfred the Great (r. 871–899) came to an agreement with the Danish invaders in 879, acknowledging the existence of an independent Viking realm in Northumbria, East Anglia and eastern Mercia.[136][137] By the middle of the 10th century, Alfred's successors had restored English control over the territory.[138] In northern Britain, Kenneth MacAlpin (d. c. 860) united the Picts and the Scots into the Kingdom of Alba.[139] In the early 10th century, the Ottonian dynasty established itself in Germany, and was engaged in driving back the Magyars and fighting the disobedient dukes. After an appeal by the widowed Queen Adelaide of Italy (d. 999) for protection, the German king Otto I (r. 936–973) crossed the Alps into Italy, married the young widow and had himself crowned king in Pavia in 951. His coronation as Holy Roman Emperor in Rome in 962 demonstrated his claim to Charlemagne's legacy.[140] Otto's successors remained keenly interested in Italian affairs but the absent German kings were unable to assert permanent authority over the local aristocracy.[141] In the Iberian Peninsula, Asturias expanded slowly south in the 8th and 9th centuries, and continued as the Kingdom of León.[142]
The Eastern European trade routes towards Central Asia and the Near East were controlled by the Khazars; their multiethnic empire resisted the Muslim expansion, and their leaders converted to Judaism.[143] At the end of the 9th century, a new trade route developed, bypassing Khazar territory and connecting Central Asia with Europe across Volga Bulgaria; here the local inhabitants converted to Islam.[144] In Scandinavia, contacts with Francia paved the way for missionary efforts by Christian clergy, and Christianization was closely associated with the growth of centralised kingdoms in Denmark, Norway, and Sweden. Swedish traders and slave hunters ranged down the rivers of the East European Plain, captured Kyiv from the Khazars, and even attempted to seize Constantinople in 860 and 907.[145] Norse colonists settled in Iceland and created a political system that hindered the accumulation of power by ambitious chieftains.[146]
Byzantium revived its fortunes under Emperor Basil I (r. 867–886) and his successors Leo VI (r. 886–912) and Constantine VII (r. 913–959), members of the Macedonian dynasty. Commerce revived and the emperors oversaw the extension of a uniform administration to all the provinces. The imperial court was the centre of a revival of classical learning, a process known as the Macedonian Renaissance. The military was reorganised, which allowed the emperors John I (r. 969–976) and Basil II (r. 976–1025) to expand the frontiers of the empire.[147] Missionary efforts by both Eastern and Western clergy resulted in the conversion of the Moravians, Danubian Bulgars, Czechs, Poles, Magyars, and the inhabitants of the Kievan Rus'.[148] Moravia fell victim to Magyar invasions around 900, Bulgaria to Byzantine expansionism between 971 and 1018.[132][149] After the fall of Moravia, dukes of the Czech Přemyslid dynasty consolidated authority in Bohemia.[150] In Poland, the destruction of old power centres and construction of new strongholds accompanied the formation of state under the Piast dukes.[151] In Hungary, the princes of the Árpád dynasty applied extensive violence to crush opposition by rival Magyar chieftains.[152] The Rurikid princes of Kievan Rus' replaced the Khazars as the hegemon power of East Europe's vast forest zones after Rus' raiders sacked the Khazar capital Atil in 965.[153]
Architecture
Under Constantine the Great and his successors, basilicas, large halls that had been used for administrative and commercial purposes, were adapted for Christian worship, and new basilicas were built in the major Roman cities and the post-Roman kingdoms.[K][156] In the late 6th century, Byzantine church architecture adopted an alternative model imitating the rectangular plan and the dome of Justinian's Hagia Sophia, the largest single roofed structure of the Roman world.[157] As the spacious basilicas became of little use with the decline of urban centres in the west, they gave way to smaller churches. By the beginning of the 8th century, the Carolingian Empire revived the basilica form of architecture.[158] One new standard feature of Carolingian basilicas is the use of a transept, or the "arms" of a T-shaped building that are perpendicular to the long nave.[159]
Magnificent halls built of timber or stone were the centres of political and social life all over the early Middle Ages. Their design often adopted elements of Late Roman architecture like pilasters, columns, and sculptured discs.[L][160] After the disintegration of the Carolingian Empire, the spread of aristocratic castles indicates a transition from communal fortifications to private defence in western Europe. Most castles were wooden structures but the wealthiest lords could afford the building of stone fortresses.[M] One or more towers, now known as keeps, were the most characteristic features of a medieval fortress. Castles often developed into multifunctional compounds with their drawbridges, fortified courtyards, cisterns or wells, halls, chapels, stables and workshops.[162]
Military and technology
During the later Roman Empire, the principal military developments were attempts to create an effective cavalry force as well as the continued development of highly specialised types of troops. The creation of heavily armoured cataphract-type soldiers as cavalry was an important feature of the Late Roman military. The various invading tribes had differing emphases on types of soldiers—ranging from the primarily infantry Anglo-Saxon invaders of Britain to the Vandals and Visigoths who had a high proportion of cavalry in their armies.[169] The greatest change in military affairs during the invasion period was the adoption of the Hunnic composite bow in place of the earlier, and weaker, Scythian composite bow.[170] The Avar heavy cavalry introduced the use of stirrups in Europe,[171] and it was adopted by Byzantine cavalrymen before the end of the 6th century.[172] Another development was the increasing use of longswords and the progressive replacement of scale armour by mail and lamellar armour.[173]
The importance of infantry and light cavalry began to decline during the early Carolingian period, with a growing dominance of elite heavy cavalry. Although much of the Carolingian armies were mounted, a large proportion during the early period appear to have been mounted infantry, rather than true cavalry.[174] The use of militia-type levies of the free population declined. One exception was Anglo-Saxon England, where the armies were still composed of regional levies, known as the fyrd.[175] In military technology, one of the main changes was the reappearance of the crossbow as a military weapon.[176] A technological advance that had implications beyond the military was the horseshoe, which allowed horses to be used in rocky terrain.[177]
High Middle Ages
The High Middle Ages was a period of tremendous population expansion. The estimated population of Europe grew from 35 to 80 million between 1000 and 1347, although the exact causes remain unclear: improved agricultural techniques, assarting (bringing new lands into production), a more clement climate and the lack of invasion have all been suggested.[179][180] Most medieval western thinkers divided society into three fundamental classes: the clergy, the nobility, and the peasantry (or commoners).[181][182] Feudalism regulated fundamental social relations in many parts of Europe. In this system, one party granted property, typically land, to the other in return for services, mostly of a military nature, that the recipient, or vassal, had to render to the grantor, or lord.[183][184] In Germany, inalienable allods remained the dominant form of landholding. Their owners owed homage to a higher-ranking aristocrat or the king, but their landholding was free of feudal obligations.[185]
As much as 90 percent of the European population remained rural peasants. Many were no longer settled in isolated farms but had gathered into more defensible small communities, usually known as manors or villages.[179][186] In the system of manorialism, a manor was the basic unit of landholding, and it comprised smaller components, such as parcels held by peasant tenants, and the lord's demesne.[187] Slaveholding was declining as churchmen prohibited the enslavement of coreligionists, but a new form of dependency (serfdom) supplanted it by the late 11th century. Unlike slaves, serfs had legal capacity, and their hereditary status was regulated by agreements with their lords. Restrictions on their activities varied but their freedom of movement was customarily limited, and they usually owed corvées, or labor services.[188][189] Peasants left their homelands in return for significant economic and legal privileges, typically a lower level of taxation and the right to administer justice at local level. The crossborder movement of masses of peasantry had radical demographic consequences, such as the spread of German settlements to the east, and the expansion of the Christian population in Iberia.[190]
With the development of heavy cavalry, the previously more or less uniform class of free warriors split into two groups. Those who could equip themselves as mounted knights were integrated into the traditional aristocracy, but others were assimilated into the peasantry.[191] The position of the new aristocracy was stabilized through the adoption of strict inheritance customs, such as primogeniture—the eldest son's right to inherit the family domains undivided.[192] Nobles were stratified in terms of the land and people over whom they held authority; the lowest-ranking nobles did not hold land and had to serve wealthier aristocrats.[N][194] Although constituting only about one percent of the population, the nobility was never a closed group: kings could raise commoners to the aristocracy, wealthy commoners could marry into noble families, and impoverished aristocrats sometimes gave up their privileged status.[195] The constant movement of Western aristocrats towards the peripheries of Latin Christendom was a characteristic feature of high medieval society. The French-speaking noblemen mainly settled in the British Isles, southern Italy or Iberia, whereas the German aristocrats preferred Central and Eastern Europe. Their migration was often supported by the local rulers, who highly appreciated their military skills, but in many cases the newcomers were conquerors who established new lordships by force.[O][197]
The clergy was divided into two types: the secular clergy, who cared for believers' spiritual needs, mainly serving in the parish churches; and the regular clergy, who lived under a religious rule as monks, canons, or friars. Throughout the period clerics remained a very small proportion of the population, usually about one percent. Churchmen supervised several aspects of everyday life, church courts had exclusive jurisdiction over marriage affairs,[198] and church authorities supported popular peace movements.[199]
Women were officially required to be subordinate to some male, whether their father, husband, or other kinsman. Women's work generally consisted of household or other domestically inclined tasks such as child-care. Peasant women could supplement the household income by spinning or brewing at home, and they were also expected to help with field-work at harvest-time.[200] Townswomen could engage in trade but often only by right of their husband, and unlike their male competitors, they were not always allowed to train apprentices.[201] Noblewomen could inherit land in the absence of a male heir but their potential to give birth to children was regarded as their principal virtue.[202] The only role open to women in the Church was that of nuns, as they were unable to become priests.[203]
Trade and economy
The expansion of population, greater agricultural productivity and relative political stability laid the foundations for the medieval "Commercial Revolution" in the 11th century.[204] People with surplus cash began investing in commodities like salt, pepper and silk at faraway markets.[205] Rising trade brought new methods of dealing with money, and gold coinage was again minted in Europe, first in Italy and later in France. New forms of commercial contracts emerged, allowing risk to be shared within the framework of partnerships known as commenda or compagnia.[206] Bills of exchange also appeared, enabling easy transmission of money. As many types of coins were in circulation, money changers facilitated transactions between local and foreign merchants. Loans could also be negotiated with them, which gave rise to the development of credit institutions called banks.[207] As new towns were developing from local commercial centres, the economic growth brought about a new wave of urbanisation. Kings and aristocrats mainly supported the process in the hope of increased tax revenues.[208] Most urban communities received privileges acknowledging their autonomy, but few cities could get rid of all elements of royal or aristocratic control.[209] Throughout the Middle Ages the population of the towns probably never exceeded 10 percent of the total population.[210]
The Italian maritime republics such as Amalfi, Venice, Genoa, and Pisa were the first to profit from the revival of commerce in the Mediterranean.[211] In the north, German merchants established associations known as hansas and took control of the trade routes connecting the British Isles and the Low Countries with Scandinavia and Eastern Europe.[212][P] Great trading fairs were established and flourished in northern France, allowing Italian and German merchants to trade with each other as well as local merchants.[214] In the late 13th century new land and sea routes to the Far East were pioneered, famously described in The Travels of Marco Polo written by one of the traders, Marco Polo (d. 1324).[215]
Economic growth provided opportunities for Jewish merchants to spread all over Europe. Although most kings, bishops and aristocrats appreciated the Jews' contribution to the local economy, many commoners regarded the non-Christian newcomers as an imminent threat to social cohesion.[216] As the Jews could not engage in prestigious trades outside their communities, they often took despised jobs, working as ragmen or tax collectors.[217] They were especially active in moneylending, for they could ignore the Christian clerics' condemnation of loan interest.[218] Jewish moneylenders and pawnbrokers reinforced antisemitism, which led to accusations of blasphemy, blood libels, and pogroms. Church authorities' growing concerns about Jewish influence on Christian life inspired segregationist laws,[Q] and even the Jews' permanent expulsion from England in 1290.[220]
Technology developed mainly through minor innovations and by the adoption of advanced technologies from Asia through Muslim mediation.[222] Major technological advances included the first mechanical clocks, the manufacture of distilled spirits, and the use of the astrolabe.[223] Convex spectacles were probably invented around 1286.[224] Windmills were first built in Europe in the 12th century,[223] and spinning wheels appeared around 1200.[225]
The development of a three-field rotation system for planting crops[R] increased the usage of land by more than 30 percent, with a consequent increase in production.[226] The development of the heavy plough allowed heavier soils to be farmed more efficiently. The spread of the horse collar led to the use of draught horses, which required less pasture than oxen.[227] Legumes—such as peas, beans, or lentils—were grown more widely, in addition to the cereal crops.[228]
The construction of cathedrals and castles advanced building technology, leading to the development of large stone buildings. Ancillary structures included new town halls, hospitals, bridges, and tithe barns.[229] Shipbuilding improved with the use of the rib and plank method rather than the old Roman system of mortise and tenon. Other improvements to ships included the use of lateen sails and the stern-post rudder, both of which increased the speed at which ships could be sailed.[230]
In military affairs, the use of infantry with specialised roles increased. Along with the still-dominant heavy cavalry, armies often included mounted and infantry crossbowmen, as well as sappers and engineers.[231] Crossbows increased in use partly because of the increase in siege warfare.[176][S] This led to the use of closed-face helmets, heavy body armour, as well as horse armour during the 12th and 13th centuries.[233] Gunpowder was known in Europe by the mid-13th century.[234]
Lay investiture—the appointment of clerics by secular rulers—was condemned at an assembly of bishops in Rome in 1059, and the same synod established the exclusive right of the College of Cardinals to elect the popes.[238] Emperor Henry III's son and successor, Henry IV (r. 1056–1105), wanted to preserve the right to appoint his own choices as bishops within his lands, but his appointments outraged Pope Gregory VII (pope 1073–85). Their quarrel developed into the Investiture Controversy, involving other powers as well because kings did not relinquish the control of appointments to bishoprics voluntarily. All conflicts ended with a compromise, in the case of the Holy Roman Emperors with the 1122 Concordat of Worms, mainly acknowledging the monarchs' claims.[239][240][T]
The High Middle Ages was a period of great religious movements.[242] Old pilgrimage sites such as Rome, Jerusalem, and Compostela received increasing numbers of visitors, and new sites such as Monte Gargano and Bari rose to prominence.[243] Popular movements emerged to support the implementation of the church reform but their anticlericalism sometimes led to the rejection of Catholic dogmas by the most radical groups such as the Waldensians and Cathars.[244][245] To suppress heresies, the popes appointed special commissioners of investigation known as inquisitors.[246] Monastic reforms continued as the Cluniac monasteries' splendid ceremonies were alien to those who preferred the simpler eremitical monasticism of early Christianity, or wanted to live the "Apostolic life" of poverty and preaching. New monastic orders were established, including the Carthusians and the Cistercians.[247] In the 13th century mendicant orders—the Franciscans and the Dominicans—who swore vows of poverty and earned their living by begging, were approved by the papacy.[248]
The High Middle Ages saw the development of institutions that would dominate political life in Europe until the late 18th century. Representative assemblies exerted influence on state administration through their control of taxation.[249] The concept of hereditary monarchy was strengthening in parallel with the development of laws governing the inheritance of land.[250] As female succession was recognised in most countries, the first reigning queens assumed power.[U][252]
In the Holy Roman Empire, the Ottonians were replaced by the Salians in 1024. They protected the lesser nobility to reduce the German dukes' power and seized Burgundy before clashing with the papacy under Henry IV.[256] After a short interval between 1125 and 1137, the Hohenstaufens succeeded the Salians. Their recurring conflicts with the papacy allowed the northern Italian cities and the German ecclesiastic and secular princes to extort considerable concessions from the emperors. In 1183, Frederick I Barbarossa (r. 1155–90) sanctioned the right of the Lombard cities to elect their leaders; the German princes' judicial and economic privileges were confirmed during the reign of his grandson Frederick II (r. 1220–50).[257] Frederick was famed for his erudition and unconventional lifestyle[V] but his efforts to rule Italy eventually led to the fall of his dynasty. In Germany, a period of interregnum, or rather civil war, began, whereas Sicily—Frederick's maternal inheritance—was seized by an ambitious French prince, Charles I of Anjou (r. 1266–85).[259] During the German civil war, the right of seven prince-electors to elect the king was reaffirmed. Rudolf of Habsburg (r. 1273–91), the first king to be elected after the interregnum, realised that he was unable to control the whole empire. He granted Austria to his sons, thus establishing the basis for the Habsburgs' future dominance in Central Europe.[260][261]
Under the Capetian dynasty, the French monarchy slowly began to expand its authority over the nobility.[262] The French kings faced a powerful rival in the Dukes of Normandy, who in 1066 under William the Conqueror (r. 1035–87) conquered England and created a cross-Channel empire.[263][264] Under the Angevin dynasty of Henry II (r. 1154–89) and his son Richard I (r. 1189–99), the kings of England ruled over England and large areas of France. Richard's younger brother John (r. 1199–1216) lost the northern French possessions in 1204 to the French king Philip II Augustus (r. 1180–1223).[265] This led to dissension among the English nobility, while John's financial exactions to pay for his unsuccessful attempts to regain Normandy led in 1215 to Magna Carta, a charter that confirmed the rights and privileges of free men in England. Under Henry III (r. 1216–72), John's son, further concessions were made to the nobility, and royal power was diminished.[266] In France, Philip Augustus's son Louis VIII (r. 1223–26) distributed large portions of his father's conquests among his younger sons as appanages—virtually independent provinces—to facilitate their administration. On his death his widow Blanche of Castile (d. 1252) assumed the regency, and crushed a series of aristocratic revolts.[267] Their son Louis IX (r. 1226–70) improved local administration by appointing inspectors known as enquêteurs to oversee the royal officials' conduct. The royal court (or parlement) at Paris began hearing litigants in regular sessions throughout much of the year.[268]
The Iberian Christian states, which had been confined to the northern part of the peninsula, began to push back against the Islamic states in the south, a period known as the Reconquista.[269] By about 1150, the Christian north had coalesced into the five major kingdoms of León, Castile, Aragon, Navarre, and Portugal.[270] Southern Iberia remained under the control of Islamic states, initially under the Caliphate of Córdoba, which broke up in 1031 into a shifting number of petty states known as taifas.[269] Although the Almoravids and the Almohads, two dynasties from the Maghreb, established centralised rule over Southern Iberia in the 1110s and 1170s respectively, their empires quickly disintegrated, allowing further expansion of the Christian kingdoms.[271] The Catholic Scandinavian states also expanded: the Norwegian kings assumed control of the Norse colonies in Iceland and Greenland, Denmark seized parts of Estonia, and the Swedes conquered Finland.[272]
With the rise of the Mongol Empire in the Eurasian steppes under Genghis Khan (r. 1206–27), a new expansionist power reached Europe's eastern borderlands.[278] Between 1236 and 1242, the Mongols conquered Volga Bulgaria, shattered the Rus' principalities, and laid waste to large regions in Poland, Hungary, Croatia, Serbia and Bulgaria. Their commander-in-chief Batu Khan (r. 1241–56)—a grandson of Genghis Khan—set up his capital at Sarai on the Volga, establishing the Golden Horde, a Mongol state nominally under the distant Great Khan's authority. The Mongols extracted heavy tribute from the Rus' principalities, and the Rus' princes had to ingratiate themselves with the Mongol khans for economic and political concessions.[W] The Mongol conquest was followed by a peaceful period in Eastern Europe which facilitated the development of direct trade contacts between Europe and China through newly established Genoese colonies in the Black Sea region.[280]
Clashes with secular powers during the Investiture Controversy accelerated the militarization of the papacy. Pope Urban II (pope 1088–99) proclaimed the First Crusade at the Council of Clermont, declaring the liberation of Jerusalem as its ultimate goal, and offering indulgence—the remission of sins—to all who took part.[281] Tens of thousands of fanatics, mainly common people, formed loosely organised bands, lived off looting, and attacked the Jewish communities as they were marching to the east. Antisemitic pogroms were especially violent in the Rhineland. Few of the first crusaders reached Asia Minor, and those who succeeded were annihilated by the Turks.[282][283] The official crusade departed in 1096 under the command of prominent aristocrats like Godfrey of Bouillon (d. 1100), and Raymond of Saint-Gilles (d. 1105). They defeated the Turks in two major battles at Dorylaeum and Antioch, allowing the Byzantines to recover western Asia Minor. The westerners consolidated their conquests into crusader states in northern Syria and Palestine, but their security depended on external military assistance which led to further crusades.[284] Muslim resistance was raised by ambitious warlords, like Saladin (d. 1193) who captured Jerusalem in 1187.[285] New crusades prolonged the crusader states' existence for another century, until the crusaders' last strongholds fell to the Mamluks of Egypt in 1291.[286]
With its specific ceremonies and institutions, the crusading movement became a defining element of medieval life.[Y] Often extraordinary taxes were levied to finance the crusades, and from 1213 a crusader oath could be fulfilled through a cash payment, which gave rise to the sale of plenary indulgences by Church authorities.[294] The crusades brought about the fusion of monastic life with military service within the framework of a new type of monastic order, the military orders. The establishment of the Knights Templar set the precedent, inspiring the militarization of charitable associations, like the Hospitallers and the Teutonic Knights, and the founding of new orders of warrior monks, like the Order of Calatrava and the Livonian Brothers of the Sword.[295][296] Although established in the crusader states, the Teutonic Order focused much of its activity in the Baltic where they founded their own state in 1226.[297]
The discovery of a copy of the Corpus Juris Civilis in the 11th century paved the way for the systematic study of Roman law at Bologna. This led to the recording and standardisation of legal codes throughout Western Europe. Around 1140, the monk Gratian (fl. 12th century), a teacher at Bologna, wrote what became the standard text of ecclesiastical law, or canon law—the Decretum Gratiani.[308] Among the results of the Greek and Islamic influence on this period in European history was the replacement of Roman numerals with the decimal positional number system and the introduction of algebra, which allowed more advanced mathematics. Astronomy benefited from the translation of Ptolemy's Almagest from Greek into Latin in the late 12th century. Medicine was also studied, especially in southern Italy, where Islamic medicine influenced the school at Salerno.[309]
Architecture, art, and music
In the 10th century the establishment of churches and monasteries led to the development of stone architecture that elaborated vernacular Roman forms, from which the term "Romanesque" is derived. Where available, Roman brick and stone buildings were recycled for their materials. From the tentative beginnings known as the First Romanesque, the style flourished and spread across Europe in a remarkably homogeneous form. Just before 1000 there was a great wave of building stone churches all over Europe.[310] Romanesque buildings have massive stone walls, openings topped by semi-circular arches, small windows, and, particularly in France, arched stone vaults.[311] The large portal with coloured sculpture in high relief became a central feature of façades, especially in France, and the capitals of columns were often carved with narrative scenes of imaginative monsters and animals.[312] According to art historian C. R. Dodwell, "virtually all the churches in the West were decorated with wall-paintings", of which few survive.[313] Simultaneous with the development in church architecture, the distinctive European form of the castle was developed and became crucial to politics and warfare.[314]
During this period the practice of manuscript illumination gradually passed from monasteries to lay workshops, so that according to Janetta Benton "by 1300 most monks bought their books in shops",[319] and the book of hours developed as a form of devotional book for lay-people. Metalwork continued to be the most prestigious form of art, with Limoges enamel a popular and relatively affordable option.[320] In Italy the innovations of Cimabue and Duccio, followed by the Trecento master Giotto (d. 1337), greatly increased the sophistication and status of panel painting and fresco.[321] Increasing prosperity during the 12th century resulted in greater production of secular art; many carved ivory objects such as gaming pieces, combs, and small religious figures have survived.[322]
Late Middle Ages
Famine and plague
Average annual temperatures declined from around 1200, marking the gradual transition from the Medieval Warm Period to the Little Ice Age. Climate anomalies caused agricultural crises and famine, culminating in the Great Famine of 1315–17.[323] As the starving peasants slaughtered their draught animals, those who survived had to make extraordinary efforts to revive farming. The previously profitable monoculture aggravated the situation in many regions, as unseasonable weather could completely ruin a harvest season.[324]
Execution of some of the ringleaders of the Jacquerie, from a 14th-century manuscript of the Chroniques de France ou de St Denis
These troubles were followed in 1347 by the Black Death, a pandemic that spread throughout Europe during the following three years, killing about one-third of the population. Towns were especially hard-hit because of their crowded conditions.[AA] The rapid and extremely high mortality destroyed the economy and trade, and recovery was slow. The peasants who survived the pandemic paid lower rents to the landlords but demand for agricultural products declined, and the lower prices barely covered their costs. Urban workers received higher salaries but they were heavily taxed. Occasionally, the governments tried to fix rural rents at a high level, or to keep urban salaries low, which provoked popular uprisings across Europe, including the Jacquerie in France, the Peasants' Revolt in England, and the Ciompi Revolt in Florence.[326] The trauma of the plague led to increased piety throughout Europe, manifested by the foundation of new charities, the self-mortification of the flagellants, and the scapegoating of Jews.[327] Plague continued to strike Europe periodically during the rest of the Middle Ages.[325]
Society and economy
Society throughout Europe was disturbed by the dislocations caused by the Black Death. Lands that had been marginally productive were abandoned, as the survivors were able to acquire more fertile areas.[328] Although serfdom declined in Western Europe it became more common in Eastern Europe, as landlords imposed it on tenants who had previously been free.[329] Most peasants in Western Europe managed to change the work they had previously owed to their landlords into cash rents.[330] The percentage of serfs amongst the peasantry declined from a high of 90 to closer to 50 percent by the end of the period.[193] Landlords also became more conscious of common interests with other landholders, and they joined to extort privileges from their governments. Partly at the urging of landlords, governments attempted to legislate a return to the economic conditions that existed before the Black Death.[330] Non-clergy became increasingly literate, and urban populations began to imitate the nobility's interest in chivalry.[331]
Jewish communities were expelled from England in 1290 and from France in 1306. Many emigrated eastwards, settling in Poland and Hungary.[332] The Jews were expelled from Spain in 1492 and dispersed to Turkey, France, Italy, and Holland.[333] The rise of banking in Italy during the 13th century continued throughout the 14th century, fuelled partly by the increasing warfare of the period and the needs of the papacy to move money between kingdoms. Many banking firms loaned money to royalty, at great risk, as some were bankrupted when kings defaulted on their loans.[334][AB]
State resurgence
Strong, royalty-based nation states rose throughout Europe in the Late Middle Ages, particularly in England, France, and the Christian kingdoms of the Iberian Peninsula: Aragon, Castile, and Portugal. The long conflicts of the period strengthened royal control over their kingdoms and were extremely hard on the peasantry. Kings profited from warfare that extended royal legislation and increased the lands they directly controlled.[335] Paying for the wars required that methods of taxation become more effective and efficient, and the rate of taxation often increased.[336] The requirement to obtain the consent of taxpayers allowed representative bodies such as the English Parliament and the French Estates General to gain power and authority.[337]
Throughout the 14th century, French kings sought to expand their influence at the expense of the territorial holdings of the nobility.[338] They ran into difficulties when attempting to confiscate the holdings of the English kings in southern France, leading to the Hundred Years' War,[339] waged from 1337 to 1453.[340] Early in the war, the English won the battles of Crécy and Poitiers, captured the city of Calais, and won control of much of France.[AC] The resulting stresses almost caused the disintegration of the French kingdom.[342] In the early 15th century, France again came close to dissolving, after Henry V's victory at the Battle of Agincourt in 1415, which briefly paved the way for a unification of the two kingdoms. However, his son Henry VI soon squandered all previous gains,[343] and in the late 1420s, the military successes of Joan of Arc (d. 1431) led to the victory of the French and the capture of the last English possessions in southern France in 1453.[344] The price was high, with the population of France at the end of the wars likely half what it had been at the start. Conversely, the Wars had a positive effect on English national identity, doing much to fuse the various local identities into a national English ideal. The conflict with France also helped create a national culture in England separate from French culture, which had previously been the dominant influence.[345] The dominance of the English longbow began during early stages of the Hundred Years' War,[346] and cannon appeared on the battlefield at Crécy in 1346.[347]
In modern-day Germany, the Holy Roman Empire continued to rule, but the elective nature of the imperial crown meant there was no enduring dynasty around which a strong state could form.[348] Further east, the kingdoms of Poland, Hungary, and Bohemia grew powerful.[349] In Iberia, the Christian kingdoms continued to gain land from the Muslim kingdoms of the peninsula;[350] Portugal concentrated on expanding overseas during the 15th century, while the other kingdoms were riven by difficulties over royal succession and other concerns.[351][352] After losing the Hundred Years' War, England went on to suffer a long civil war known as the Wars of the Roses, which lasted into the 1490s[352] and only ended when Henry Tudor (r. 1485–1509 as Henry VII) became king and consolidated power with his victory over Richard III (r. 1483–85) at Bosworth in 1485.[353] In Scandinavia, Margaret I of Denmark (r. in Denmark 1387–1412) consolidated Norway, Denmark, and Sweden in the Union of Kalmar, which continued until 1523. The major power around the Baltic Sea was the Hanseatic League, a commercial confederation of city-states that traded from Western Europe to Russia.[354] Scotland emerged from English domination under Robert the Bruce (r. 1306–29), who secured papal recognition of his kingship in 1328.[355]
Although the Palaiologos emperors recaptured Constantinople from the Western Europeans in 1261, they were never able to regain control of much of the former imperial lands. The former Byzantine lands in the Balkans were divided between the new Kingdom of Serbia, the Second Bulgarian Empire and the city-state of Venice. The power of the Byzantine emperors was threatened by a new Turkish tribe, the Ottomans, who established themselves in Anatolia in the 13th century and steadily expanded throughout the 14th century. The Ottomans expanded into Europe, reducing Bulgaria to a vassal state by 1366 and taking over Serbia after its defeat at the Battle of Kosovo in 1389. Western Europeans rallied to the plight of the Christians in the Balkans and declared a new crusade in 1396; a great army was sent to the Balkans, where it was defeated at the Battle of Nicopolis.[356] Constantinople was finally captured by the Ottomans in 1453.[357]
Controversy within the Church
During the tumultuous 14th century, disputes within the leadership of the Church led to the Avignon Papacy of 1309–76,[358] also called the "Babylonian Captivity of the Papacy" (a reference to the Babylonian captivity of the Jews),[359] and then to the Great Schism, lasting from 1378 to 1418, when there were two and later three rival popes, each supported by several states.[360] Ecclesiastical officials convened at the Council of Constance in 1414, and in the following year, the council deposed one of the rival popes, leaving only two claimants. Further depositions followed, and in November 1417, the council elected Martin V (pope 1417–31) as pope.[361]
Besides the schism, the Western Church was riven by theological controversies, some of which turned into heresies. John Wycliffe (d. 1384), an English theologian, was condemned as a heretic in 1415 for teaching that the laity should have access to the text of the Bible as well as for holding views on the Eucharist that were contrary to Church doctrine.[362] Wycliffe's teachings influenced two of the major heretical movements of the later Middle Ages: Lollardy in England and Hussitism in Bohemia.[363] The Bohemian movement began with the teaching of Jan Hus, who was burned at the stake in 1415. The Hussite Church, although the target of a crusade, survived beyond the Middle Ages.[364] Other heresies were manufactured, such as the accusations against the Knights Templar that resulted in their suppression in 1312 and the division of their great wealth between the French king Philip IV (r. 1285–1314) and the Hospitallers.[365]
The papacy further refined the practice of the Mass in the Late Middle Ages, holding that the clergy alone was allowed to partake of the wine in the Eucharist. This further distanced the secular laity from the clergy. The laity continued the practices of pilgrimages, veneration of relics, and belief in the power of the devil. Mystics such as Meister Eckhart (d. 1327) and Thomas à Kempis (d. 1471) wrote works that taught the laity to focus on their inner spiritual life, which laid the groundwork for the Protestant Reformation. Besides mysticism, belief in witches and witchcraft became widespread, and by the late 15th century, the Church had begun to lend credence to populist fears of witchcraft.[366]
Scholars, intellectuals, and exploration
During the Later Middle Ages, theologians such as John Duns Scotus (d. 1308) and William of Ockham (d. c. 1348)[367] led a reaction against intellectualist scholasticism, objecting to the application of reason to faith. Their efforts undermined the prevailing Platonic idea of universals. Ockham's insistence that reason operates independently of faith allowed science to be separated from theology and philosophy.[368] Legal studies were marked by the steady advance of Roman law into areas of jurisprudence previously governed by customary law. The lone exception to this trend was in England, where the common law remained pre-eminent. Other countries codified their laws; legal codes were promulgated in Castile, Poland, and Lithuania.[369]
Education remained mostly focused on the training of future clergy. The basic learning of the letters and numbers remained the province of the family or a village priest, but the secondary subjects of the trivium—grammar, rhetoric, logic—were studied in cathedral schools or in schools provided by cities. Universities spread throughout Europe in the 14th and 15th centuries. Lay literacy rates rose, but were still low; one estimate gave a literacy rate of ten percent of males and one percent of females in 1500.[370]
The publication of vernacular literature increased, with Dante (d. 1321), Petrarch and Boccaccio in 14th-century Italy, Geoffrey Chaucer (d. 1400) and William Langland (d. c. 1386) in England, and François Villon (d. 1464) and Christine de Pizan (d. c. 1430) in France. Much literature remained religious in character, and although a great deal of it continued to be written in Latin, a new demand developed for saints' lives and other devotional tracts in the vernacular languages.[369] This was fed by the growth of the Devotio Moderna movement, most prominently in the formation of the Brethren of the Common Life.[371] Theatre also developed in the guise of miracle plays put on by the Church.[369] At the end of the period, the development of the printing press in about 1450 led to the establishment of publishing houses throughout Europe by 1500.[372]
In the early 15th century, the countries of the Iberian Peninsula began to sponsor exploration beyond the boundaries of Europe. Prince Henry the Navigator of Portugal (d. 1460) sent expeditions that discovered the Canary Islands, the Azores, and Cape Verde during his lifetime. After his death, exploration continued; Bartolomeu Dias (d. 1500) went around the Cape of Good Hope in 1486, and Vasco da Gama (d. 1524) sailed around Africa to India in 1498.[373] The combined Spanish monarchies of Castile and Aragon sponsored the voyage of exploration by Christopher Columbus (d. 1506) in 1492 that led to his discovery of the Americas.[374] The English crown under Henry VII sponsored the voyage of John Cabot (d. 1498) in 1497, which landed on Cape Breton Island.[375]
Technological and military developments
One of the major developments in the military sphere during the Late Middle Ages was the increased use of infantry and light cavalry.[376] The English also employed longbowmen, but other countries were unable to create similar forces with the same success.[377] Armour continued to advance, spurred by the increasing power of crossbows, and plate armour was developed to protect soldiers from crossbows as well as the handheld guns that were developed.[378] Pole arms reached new prominence with the development of the Flemish and Swiss infantry armed with pikes and other long spears.[379]
In agriculture, the increased usage of sheep with long-fibred wool allowed a stronger thread to be spun. In addition, the spinning wheel replaced the traditional distaff, tripling production.[225][AD] A less technological refinement that still greatly affected daily life was the use of buttons as closures for garments.[381] Windmills were refined with the creation of the tower mill, allowing the upper part of the windmill to be spun around to face the direction from which the wind was blowing.[382] The blast furnace appeared around 1350 in Sweden, increasing the quantity of iron produced and improving its quality.[383] The first patent law, passed in Venice in 1447, protected the rights of inventors to their inventions.[384]
Late medieval art and architecture
The Late Middle Ages in Europe as a whole correspond to the Trecento and Early Renaissance cultural periods in Italy. Northern Europe and Spain continued to use Gothic styles, which became increasingly elaborate in the 15th century, until almost the end of the period. International Gothic was a courtly style that reached much of Europe in the decades around 1400, producing masterpieces such as the Très Riches Heures du Duc de Berry.[385] All over Europe, secular art continued to increase in quantity and quality, and in the 15th century, the mercantile classes of Italy and Flanders became important patrons, commissioning small portraits as well as a growing range of luxury items such as jewellery, ivory caskets, cassone chests, and maiolica pottery. Although royalty owned huge collections of plate, little survives except for the Royal Gold Cup.[386] Italian silk manufacture developed, so that Western churches and elites no longer needed to rely on imports from Byzantium or the Islamic world. In France and Flanders tapestry weaving of sets like The Lady and the Unicorn became a major luxury industry.[387]
The large external sculptural schemes of Early Gothic churches gave way to more sculpture inside the building, as tombs became more elaborate and other features such as pulpits were sometimes lavishly carved, as in the Pulpit by Giovanni Pisano in Sant'Andrea. Painted or carved wooden relief altarpieces became common, especially as churches created many side-chapels. Early Netherlandish paintings by artists such as Jan van Eyck (d. 1441) and Rogier van der Weyden (d. 1464) rivalled that of Italy, as did northern illuminated manuscripts, which in the 15th century began to be collected on a large scale by secular elites, who also commissioned secular books, especially histories. From about 1450, printed books rapidly became popular, though still expensive. There were around 30,000 different editions of incunabula, or works printed before 1500,[388] by which time illuminated manuscripts were commissioned only by royalty and a few others. Very small woodcuts, nearly all religious, were affordable even by peasants in parts of Northern Europe from the middle of the 15th century. More expensive engravings supplied a wealthier market with a variety of images.[389]
Modern perceptions
The medieval period is frequently caricatured as a "time of ignorance and superstition" that placed "the word of religious authorities over personal experience and rational activity."[390] This is a legacy from both the Renaissance and Enlightenment when scholars favourably contrasted their intellectual cultures with those of the medieval period. Renaissance scholars saw the Middle Ages as a period of decline from the high culture and civilisation of the Classical world. Enlightenment scholars saw reason as superior to faith, and thus viewed the Middle Ages as a time of ignorance and superstition.[13]
Others argue that reason was generally held in high regard during the Middle Ages. Science historian Edward Grant writes, "If revolutionary rational thoughts were expressed [in the 18th century], they were only made possible because of the long medieval tradition that established the use of reason as one of the most important of human activities".[391] Also, contrary to common belief, David Lindberg writes, "The late medieval scholar rarely experienced the coercive power of the Church and would have regarded himself as free (particularly in the natural sciences) to follow reason and observation wherever they led."[392]
The caricature of the period is also reflected in some more specific notions. One misconception, first propagated in the 19th century,[393] is that all people in the Middle Ages believed that the Earth was flat.[393] This is untrue, as lecturers in the medieval universities commonly argued that evidence showed the Earth was a sphere.[394] Other misconceptions such as "the Church prohibited autopsies and dissections during the Middle Ages", "the rise of Christianity killed off ancient science", or "the medieval Christian Church suppressed the growth of natural philosophy", are all cited by Numbers as examples of widely popular myths that still pass as historical truth, although they are not supported by historical research.[395]
Notes
^The commanders of the Roman military in the area appear to have taken food and other supplies intended to be given to the Goths and instead sold them to the Goths. The revolt was triggered when one of the Roman military commanders attempted to take the Gothic leaders hostage but failed to secure all of them.[31]
^An alternative date of 480 is sometimes given, as that was the year Romulus Augustulus' predecessor Julius Nepos died; Nepos had continued to assert that he was the Western emperor while holding onto Dalmatia.[40]
^Childeric's grave was discovered at Tournai in 1653 and is remarkable for its grave goods, which included weapons and a large quantity of gold.[50]
^Among the powerful women, the Arian queen Goiswintha (d. 589) was a vehement but unsuccessful opponent of the Visigoths' conversion to Catholicism, and the Frankish queen Brunhilda of Austrasia (d. 613) was torn to pieces by horses at the age of 70.[68]
^Limited evidence from early medieval cemeteries indicates that the sex ratio at death was 120–130 men to 100 women in parts of Europe.[70]
^In France, Germany, and the Low Countries there was a further type of "noble", the ministerialis, who were in effect unfree knights. They descended from serfs who had served as warriors or government officials; this increased status allowed their descendants to hold fiefs as well as become knights while still being technically serfs.[193]
^These two groups—Germans and Italians—took different approaches to their trading arrangements. Most German cities co-operated when dealing with the northern rulers, in contrast with the Italian city-states, which engaged in internecine strife. For instance, conflicts between Italian, Catalan and Provençal merchant communities culminated in the War of Saint Sabas in the Levant in 1257.[213]
^It had spread in Northern Europe by 1000, and had reached Poland by the 12th century.[226]
^Crossbows are slow to reload, which limits their use on open battlefields. In sieges the slowness is not as big a disadvantage, as the crossbowman can hide behind fortifications while reloading.[232]
^Most compromise was based on a distinction between a prelate's spiritual and temporal responsibilities. This allowed the bishops and abbots to swear an oath of fealty to the emperor or king in return for their investment in the possessions of bishoprics and abbeys without formally sanctioning the monarch's claim to control their election.[241]
^Frederick II had a harem, was dressed in Arab-style garments, and wore a mantle decorated with verses from the Quran during his imperial coronation in Rome.[258]
^For example, Prince Alexander Nevsky (d. 1263) made four visits at Sarai to gain the Khans' favor. He overcame his rivals with Mongol assistance, crushed an anti-Mongol riot in Novgorod, and received a grant of tax exemption for the Orthodox Church.[279]
^After the fall of Constantinople to the crusaders, three Byzantine successor states emerged: Epirus in northern Greece and Albania, Nicaea in western Asia Minor, and Trebizond in northeastern Asia Minor. Michael VIII had ruled Nicaea before seizing Constantinople.[290]
^Those who decided to participate in a crusade took an oath and placed the mark of the cross on their clothes. The crusaders enjoyed privileges, including a moratorium on debts, but those who failed to fulfil the crusader oath faced infamy or excommunication.[293]
Lightbown, Ronald W. (1978). Secular Goldsmiths' Work in Medieval France: A History. Reports of the Research Committee of the Society of Antiquaries of London. London: Thames and Hudson. ISBN 0-500-99027-1.
Vale, Malcolm (1998). "The Civilization of Courts and Cities in the North, 1200–1500". In Holmes, George (ed.). The Oxford Illustrated History of Medieval Europe. Oxford, UK: Oxford University Press. pp. 297–351. ISBN 0-19-285220-5.
Whitton, David (1998). "The Society of Northern Europe in the High Middle Ages, 900–1200". In Holmes, George (ed.). The Oxford Illustrated History of Medieval Europe. Oxford, UK: Oxford University Press. pp. 115–174. ISBN 0-19-285220-5. | In their concept, their age had begun when Christ had brought light to mankind, contrasted with the spiritual darkness of previous periods. The Italian humanist and poet Petrarch (d. 1374) turned the metaphor upside down, stating that the age of darkness had begun when emperors of non-Italian origin assumed power in the Roman Empire.[10]
The most commonly given starting point for the Middle Ages is around 500,[11] with 476—the year the last Western Roman Emperor was deposed—first used by Bruni.[7] For Europe as a whole, 1500 is often considered to be the end of the Middle Ages,[12] but there is no universally agreed-upon end date. Depending on the context, events such as the conquest of Constantinople by the Turks in 1453, Christopher Columbus's first voyage to the Americas in 1492, or the Protestant Reformation in 1517 are sometimes used.[13] English historians often use the Battle of Bosworth Field in 1485 to mark the end of the period.[14]
Historians from Romance language-speaking countries tend to divide the Middle Ages into two parts: an earlier "High" and later "Low" period. English-speaking historians, following their German counterparts, generally subdivide the Middle Ages into three intervals: "Early", "High", and "Late".[2] | yes |
Archaeology | Were there Dark Ages in the Middle Ages? | no_statement | there were no "dark" ages in the middle ages.. the middle ages did not have a period referred to as the "dark" ages. | https://www.medievalists.net/2023/06/middle-ages-dark-ages/ | Why the Middle Ages are called the 'Dark Ages' - Medievalists.net | Why the Middle Ages are called the ‘Dark Ages’
The Dark Ages – it is a term that evokes images of war, destruction and death. How did the term ‘Dark Ages’ become synonymous with the Middle Ages, and why do we still refer to it like that?
History is full of people talking about how they are living in a ‘dark time’ or in an ‘age of light’ – it is an easy metaphor to explain that you are living in good or bad times. It was a metaphor used by the 14th-century Italian poet Petrarch, who was a great admirer of the ancient Romans and Greeks. He compared those times with his own and found that he wasn’t very happy with the present-day situation. In one of his works he writes,
My fate is to live among varied and confusing storms. But for you perhaps, if as I hope and wish you will live long after me, there will follow a better age. This sleep of forgetfulness will not last for ever. When the darkness has been dispersed, our descendants can come again in the former pure radiance.
Petrarch’s views would be taken up by other Italian scholars – by the late 14th and 15th centuries they were having an intellectual and artistic flowering, and began seeing themselves as following in the footsteps of the ancients. Janet Nelson explains that, at least in their minds, they were “confidently believing theirs was a time of reborn classical culture, they rescued Greek from near-oblivion, removed errors from Latin, cleared fog from philosophy, crassness from theology, crudeness from art.”
These writers began to see history as divided into three phases – there was the Classical Age, a time of Greek wisdom, Roman power, and when Jesus Christ walked in this world; and their own time, a Renaissance when things were getting better. Meanwhile, there were all those centuries in between – from the fall of the Roman Empire in the fifth century to just before their own time. After a little while it got the name ‘Middle Ages’ (or Medium Aevum in Latin). The Italian writers viewed it as an era when everything was in decline, when the great buildings of Rome like the Colosseum were slowly crumbling and when no one was producing great works of literature.
The idea of the ‘Middle Ages’ would spread to other historians around Europe. However, the term ‘Dark Ages’ is something usually found in just English writing. By the seventeenth and eighteenth centuries you have writers like Edward Gibbon referring to this time as “the darkness of the middle ages” and portraying life during this time as full of either uncultured barbarians, evil tyrants or superstitious peasants. By the nineteenth century the Dark Ages and the Middle Ages meant the same thing.
Since then historians have become more positive about the medieval period and its achievements – and the idea that people were living in the Dark Ages is getting used less and less, at least in academic circles.
Some English historians will still say if there is any kind of ‘Dark Ages’ in medieval history, it is during the earliest part of the Middle Ages, right after the fall of Roman power in Britain around the fifth and sixth centuries. This is mostly because it is a period that has few surviving written sources, so historians have largely been left in the dark about what happened at that time.
While medievalists might roll their eyes when they hear the term the ‘Dark Ages’, this idea is probably going to survive in the public’s mind for a while longer. However, we should be glad that the other names given to the Middle Ages – including the Barbarous Ages, the Obscure Ages, the Leaden Ages, the Monkish Ages and the Muddy Ages – did not get as popular!
Archaeology | Were there Dark Ages in the Middle Ages? | no_statement | there were no "dark" ages in the middle ages.. the middle ages did not have a period referred to as the "dark" ages. | https://www.medicalnewstoday.com/articles/323533 | Medieval and Renaissance medicine: Practice and developments | What was medieval and Renaissance medicine?
The Medieval Period lasted from around 476 C.E. to 1453 C.E. The Renaissance and the Age of Discovery came after. Medieval medicine typically refers to a combination of natural and supernatural methods.
In southern Spain, North Africa, and the Middle East, Islamic scholars were translating Greek and Roman medical records and literature.
In Europe, however, scientific advances were limited.
Read on to find out more about medicine in the Middle Ages and the Renaissance.
In the Middle Ages, the local apothecary or wise woman would provide herbs and potions.
The Early Middle Ages, or Dark Ages, started when invasions broke up Western Europe into small territories run by feudal lords.
Most people lived in rural servitude. Even by 1350, the average life expectancy was 30–35 years, and 1 in 5 children died at birth.
There were no services for public health or education at this time, and communication was poor. Scientific theories had little chance to develop or spread.
People were also superstitious. They did not read or write, and there was no schooling.
Only in the monasteries was there a chance for learning and science to continue. Often, monks were the only people who could read and write.
Around 1066 C.E., things began to change.
The Universities of Oxford and Paris were established. Monarchs became owners of more territory, their wealth grew, and their courts became centers of culture. Learning started to take root. Trade grew rapidly after 1100 C.E., and towns formed.
However, with them came new public health problems.
Medieval medical practice
Across Europe, the quality of medical practitioners was poor, and people rarely saw a doctor, although they might visit a local wise woman, or witch, who would provide herbs or incantations. Midwives, too, helped with childbirth.
The Church was an important institution, and people started to mix or replace their spells and incantations with prayers and requests to saints, together with herbal remedies.
In the hope that repentance for sins might help, people practiced penance and went on pilgrimages, for example, to touch the relics of a saint, as a way of finding a cure.
Some monks, such as the Benedictines, cared for the sick and devoted their lives to that. Others felt that medicine was not in keeping with faith.
During the Crusades, many people traveled to the Middle East and learnt about scientific medicine from Arabic texts. These explained discoveries that Islamic doctors and scholars had made, based on Greek and Roman theories.
In the Islamic World, Avicenna was writing “The Canon of Medicine.” This included details on Greek, Indian, and Muslim medicine. Scholars translated it and, in time, it became essential reading throughout Western European centers of learning. It remained an important text for several centuries.
Other major texts that were translated explained the theories of Hippocrates and Galen.
The theory of humors
The ancient Egyptians developed the theory of humorism, Greek scholars and physicians reviewed it, and then Roman, medieval Islamic, and European doctors adopted it.
Each humor was linked to an organ, a temper, a pair of qualities, and an element.

Humor | Organ | Temper | Qualities | Element
Black bile | Spleen | Melancholy | Cold and dry | Earth
Phlegm | Lungs | Phlegmatic | Cold and wet | Water
Blood | The head | Sanguine | Warm and wet | Air
Yellow bile | Gallbladder | Choleric | Warm and dry | Fire
The theory held that four different bodily fluids — humors — influenced human health. They had to be in perfect balance, or a person would become sick, either physically or in terms of personality.
An imbalance could result from inhaling or absorbing vapors. Medical establishments believed that levels of these humors would fluctuate in the body, depending on what people ate, drank, inhaled, and what they had been doing.
Lung problems, for example, happened when there was too much phlegm in the body. The body’s natural reaction was to cough it up.
To restore the right balance, a doctor would recommend:
blood-letting, using leeches
consuming a special diet and medicines
The theory lasted for 2,000 years, until scientists discredited it.
Medication
Herbs were very important, and monasteries had extensive herb gardens to produce herbs to resolve each humoral imbalance. The local apothecary or witch, too, might provide herbs.
The Christian Doctrine of Signatures said that God would provide some kind of relief for every disease, and that each substance had a signature which indicated how effective it might be.
For this reason, they used seeds that looked like miniature skulls, such as the skullcap, to treat headache, for example.
The most famous medieval book on herbs is probably the “Red Book of Hergest,” which was written in Welsh around 1390 C.E.
Hospitals
Hospitals during the Middle Ages were more like the hospices of today, or homes for the aged and needy.
They housed people who were sick, poor, and blind, as well as pilgrims, travelers, orphans, people with mental illness, and individuals who had nowhere else to go.
Christian teaching held that people should provide hospitality for those in desperate need, including food, shelter, and medical care if necessary.
During the Early Middle Ages, people did not use hospitals much for treating sick people, unless they had particular spiritual needs or nowhere to live.
Monasteries throughout Europe had several hospitals. These provided medical care and spiritual guidance, for example, the Hotel-Dieu, founded in Lyons in 542 C.E. and the Hotel-Dieu of Paris, founded in 652 C.E.
The Saxons built the first hospital in England in 937 C.E., and many more followed after the Norman Conquest in 1066, including St. Bartholomew’s of London, built in 1123 C.E., which remains a major hospital today.
A hospitium was a hospital or hospice for pilgrims. In time, the hospitium developed and became more like today’s hospitals, with monks providing the expert medical care and lay people helping them.
In time, public health needs, such as wars and the plagues of the 14th century, led to more hospitals.
Surgery
Medieval barber-surgeons used special tools to remove arrowheads on the battlefield.
One area in which doctors made advances was in surgery.
Barber-surgeons carried out surgery. Their skill was important on the battlefield, where they also learnt useful skills tending to wounded soldiers.
Tasks included removing arrowheads and setting bones.
Antiseptics
Monks and scientists discovered some valuable plants with powerful anesthetic and antiseptic qualities.
People used wine as an antiseptic for washing out wounds and preventing further infection.
This would have been an empirical observation, because at that time people had no idea that infections were caused by germs.
As well as wine, surgeons used ointments and cauterization when treating wounds.
Many saw pus as a good sign that the body was ridding itself of toxins in the blood.
There was little understanding of how infection works. People did not link a lack of hygiene with the risk of infection, and many wounds became fatal for this reason.
Anesthetics
The following natural substances were used by medieval surgeons as anesthetics:
mandrake roots
opium
gall of boar
hemlock
Medieval surgeons became experts in external surgery, but they did not operate deep inside the body.
Trepanning
Some patients with neurological disorders, such as epilepsy, would have a hole drilled into their skulls “to let the demons out.” The name of this is trepanning.
Epidemics
At this time, Europe started trading with nations from all over the world. This improved wealth and living standards, but it also exposed people to pathogens from faraway lands.
Plagues
The plague of Justinian was the first recorded pandemic. Lasting from 541 into the 700s, it is believed by historians to have killed half the population of Europe.
The Black Death started in Asia and reached in Europe in the 1340s, killing 25 million.
Medical historians believe Italian merchants brought it to Europe when they fled the fighting in Crimea.
Historians say the Mongols catapulted dead bodies over the walls of Kaffa, in the Crimea, to infect enemy soldiers. This is probably the first example of biological warfare. This may have triggered the spread of infection into Europe.
From the 1450s onwards, the Middle Ages gave way to the Renaissance and the Age of Discovery. This brought new challenges and solutions.
Girolamo Fracastoro (1478–1553), an Italian doctor and scholar, suggested that epidemics may come from pathogens outside the body. He proposed that these might pass from human-to-human by direct or indirect contact.
He introduced the term “fomites,” meaning tinder, for items, such as clothing, that could harbor pathogens from which another person could catch them.
He also suggested using mercury and “guaiaco” as a cure for syphilis. Guaiaco is the oil from the Palo Santo tree, a fragrance used in soaps.
Andreas Vesalius (1514–1564), a Flemish anatomist and physician, wrote one of the most influential books on human anatomy “De Humani Corporis Fabrica” (“On the Structure of the Human Body”).
He dissected a corpse, examined it, and detailed the structure of the human body.
Technical and printing developments of the time meant that he was able to publish the book.
William Harvey (1578–1657), an English doctor, was the first person to properly describe the systemic circulation and properties of blood, and how the heart pumps it around the body.
The Arab physician Ibn al-Nafis had begun this work in 1242 C.E., but he had not fully understood the pumping action of the heart and how it was responsible for sending blood to every part of the body.
Paracelsus (1493–1541), a German-Swiss doctor, scholar, and occultist, pioneered the use of minerals and chemicals in the body.
He believed that illness and health relied on the harmony of man with nature. Rather than soul purification for healing, he proposed that a healthy body needed certain chemical and mineral balances. He added that chemical remedies could treat some illnesses.
Paracelsus wrote about the treatment and prevention strategies for metalworkers and detailed their occupational hazards.
During the Renaissance, Leonardo da Vinci and others made technical drawings that helped people to understand how the body works.
Leonardo Da Vinci (1452–1519), from Italy, was skilled in several different fields. He became an expert in anatomy and made studies of tendons, muscles, bones, and other features of the human body.
He had permission to dissect human corpses in some hospitals. Working with doctor Marcantonio della Torre, he created over 200 pages of illustrations with notes about the human anatomy.
Da Vinci also studied the mechanical functions of bones and how the muscles made them move. He was one of the first researchers of biomechanics.
Ambroise Paré (1510–1590), from France, helped lay the foundations for modern forensic pathology and surgery.
He was the royal surgeon for four French kings and an expert in battlefield medicine, particularly wound treatment and surgery. He invented several surgical instruments.
Paré once treated a group of wounded patients in two ways: cauterization and boiled elderberry oil. However, he ran out of oil and treated the remaining patients with turpentine, oil of roses, and egg yolk.
The following day, he noticed that those he had treated with turpentine had recovered, while those who received the boiling oil were still in severe pain. He realized how effective turpentine was in treating wounds, and virtually abandoned cauterization from then on.
Paré also revived the Greek method of ligature of the arteries during amputation, instead of cauterization.
This method significantly improved survival rates. It was an important breakthrough in surgical practice, despite the risk of infection.
Paré also believed that phantom pains, sometimes experienced by amputees, were related to the brain, and not something mysterious within the amputated limb.
Infections and epidemics
The Black Death killed millions of people as it appeared and reappeared over several hundred years.
Common problems at this time included smallpox, Hansen’s disease (leprosy), and the Black Death, which continued to reappear from time to time. In 1665–1666, the Black Death killed 20 percent of the population of London.
While the Black Death came from Asia, people traveling from Europe to other parts of the world also exported some deadly pathogens.
Before the Spanish explorers landed in the Americas, deadly influenza, measles and smallpox did not occur there.
Native Americans had no immunity against such diseases, making them particularly deadly.
Within 60 years of Columbus arriving in 1492 C.E., the population of the island of Hispaniola, for example, fell from at least 60,000 to fewer than 600, according to one source, due to smallpox and other infections.
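Taken at face value, the figures quoted above imply a population collapse of more than 99 percent. A quick sanity check of that arithmetic (using only the article's own numbers, which it gives as rough bounds):

```python
# Decline on Hispaniola implied by the figures quoted above.
before = 60_000  # population when Columbus arrived (lower bound in the text)
after = 600      # population within 60 years (upper bound in the text)

decline_pct = (before - after) / before * 100
print(f"{decline_pct:.1f}% decline")  # 99.0% decline
```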
In mainland South and Central America, the smallpox virus and other infections killed millions of people within 100 years of Columbus’ arrival.
Diagnosis and treatment
Methods of diagnosis did not improve much as the Middle Ages turned into the early Renaissance.
Physicians still did not know how to cure infectious diseases. When faced with the plague or syphilis, they often turned to superstitious rites and magic.
At one time, doctors asked King Charles II to help by touching sick people in an attempt to cure them of scrofula, a type of tuberculosis (TB). Another name for scrofula was “The King’s Evil.”
Explorers discovered quinine in the New World and used it to treat malaria.
Edward Anthony Jenner (1749–1823) was an English doctor and scientist, known as the pioneer of vaccinations. He created the smallpox vaccine.
History shows that, as early as 430 B.C.E., people who had recovered from smallpox were used to help treat those with the disease, because they appeared to be immune.
In the same way, Jenner noticed that milkmaids tended to be immune to smallpox. He wondered whether the pus in the cowpox blisters protected them from smallpox. Cowpox is similar to smallpox but milder.
In 1796, Jenner inserted pus taken from a cowpox pustule into the arm of James Phipps, an 8-year old boy. He then proved that Phipps was immune to smallpox because of the cowpox “vaccine.”
Others were sceptical, but Jenner’s successful experiments were finally published in 1798. Jenner coined the term “vaccine” from vacca, which in Latin means “cow.”
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark.. noah's ark contained "dinosaurs". | https://answersingenesis.org/dinosaurs/were-dinosaurs-on-noahs-ark/ | Were Dinosaurs on Noah's Ark? | Answers in Genesis | Were Dinosaurs on Noah’s Ark?
Were Dinosaurs Even Around Then?
The story we have all heard is that dinosaurs died out 65 million years ago and therefore weren’t around when Noah and company set sail on the Ark around 4,300 years ago.
[Editor’s Note (July 2016): See the latest research by AiG’s Ark Encounter researchers on Ark animals, size, logistics, and other details at ArkEncounter.com.]
The story we have all heard from movies, television, newspapers, and most magazines and textbooks is that dinosaurs “ruled the Earth” for 140 million years, died out 65 million years ago, and therefore weren’t around when Noah and company set sail on the Ark around 4,300 years ago.
However, the Bible gives a completely different view of Earth (and therefore, dinosaur) history. As God’s written Word to us, we can trust it to tell the truth about the past. (For more information about the reliability of Scripture, see Get Answers: Bible.)
Although the Bible does not tell us exactly how long ago it was that God made the world and its creatures, we can make a good estimate of the age of the universe by carefully studying the whole counsel of Scripture:
God made everything in six days, and rested on the seventh. (By the way, this is the basis for our seven day week—Exodus 20:8–11). Leading Hebrew scholars indicate that, based on the grammatical structure of Genesis 1, these “days” were of normal length, and did not represent long periods of time (see Get Answers: Genesis).
We are told God created the first man and woman—Adam and Eve—on Day 6, along with the land animals (which would have included dinosaurs).
The Bible records the genealogies from Adam to Christ. From the ages given in these lists (and accepting that Jesus Christ, the Son of God, came to Earth around 2,000 years ago), we can conclude that the universe is only a few thousand years old (perhaps just 6,000), and not millions of years old (see also “Did Jesus Say He Created in Six Literal Days?”). Thus, dinosaurs lived within the past few thousand years.
So, Were Dinosaurs on the Ark?
In Genesis 6:19–20, the Bible says that two of every sort of land vertebrate (seven of the “clean” animals) were brought by God to the Ark. Therefore, dinosaurs (land vertebrates) were represented on the Ark.
How Did Those Huge Dinosaurs Fit on the Ark?
Although there are about 668 names of dinosaurs, there are perhaps only 55 different “kinds” of dinosaurs. Furthermore, not all dinosaurs were huge like the Brachiosaurus, and even those dinosaurs on the Ark were probably “teenagers” or young adults.
Creationist researcher John Woodmorappe has calculated that Noah had on board with him representatives from about 8,000 animal genera (including some now-extinct animals), or around 16,000 individual animals as a maximum number. When you realize that horses, zebras, and donkeys are probably descended from the horse-like “kind,” Noah did not have to carry two sets of each such animal. Also, dogs, wolves, and coyotes are probably from a single canine “kind,” so hundreds of different dogs were not needed.
According to Genesis 6:15, the Ark measured 300 x 50 x 30 cubits, which is about 510 x 85 x 51 feet, with a volume of about 2.21 million cubic feet. Researchers have shown that this is the equivalent volume of over 500 semitrailers of space.1
Without getting into all the math, the 16,000-plus animals would have occupied much less than half the space in the Ark (even allowing them some moving-around space).
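The cubit-to-feet conversion and volume figure quoted above can be checked with a few lines of Python. This is only an arithmetic sketch: the 20.4-inch cubit is the article's implied value, and the roughly 4,000-cubic-foot semitrailer interior is our own rough assumption, not a figure from the article.

```python
# Arithmetic check of the Ark dimensions quoted above (300 x 50 x 30 cubits).
# Assumptions: a 20.4-inch cubit (the article's implied figure) and a usable
# semitrailer volume of ~4,000 cubic feet (our rough estimate).
CUBIT_INCHES = 20.4
FT_PER_CUBIT = CUBIT_INCHES / 12          # 1.7 ft per cubit

length_ft = 300 * FT_PER_CUBIT            # 510 ft
width_ft = 50 * FT_PER_CUBIT              # 85 ft
height_ft = 30 * FT_PER_CUBIT             # 51 ft
volume_ft3 = length_ft * width_ft * height_ft

TRAILER_FT3 = 4_000                       # assumed interior volume per semitrailer
trailer_equivalents = volume_ft3 / TRAILER_FT3

print(round(volume_ft3))                  # 2210850 (~2.21 million cubic feet)
print(round(trailer_equivalents))         # 553 (i.e., "over 500 semitrailers")
```

With those assumptions, both the 2.21-million-cubic-foot figure and the "over 500 semitrailers" comparison reproduce.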
Conclusion
The Bible is reliable in all areas, including its account of the Ark (and the worldwide catastrophic Flood). A Christian doesn’t have to have blind faith to believe that there really was an Ark. What the Bible says about the Ark can even be measured and tested today.
For answers to other objections about the biblical account of Noah’s Flood and the Ark (e.g., Where did all the water come from? How did Noah collect and then care for the animals? etc.), see the books featured below. “Was There Really a Noah’s Ark & Flood?” covers these particular “problems” related to Noah’s Flood, and Noah’s Ark: A Feasibility Study covers these and more in detail.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark.. noah's ark contained "dinosaurs". | https://www.icr.org/article/were-dinosaurs-noahs-ark | Were Dinosaurs on Noah's Ark? | The Institute for Creation Research | When visitors inspect ICR’s seven-and-a-half-foot-long model of Noah’s Ark, the dinosaur figurines on the bottom deck tend to catch their eyes. They often ask about those dinosaurs, giving our tour guides a chance to explain how dinosaurs fit in biblical history.
First, God created each dinosaur as a “beast of the earth” on Day Six of the creation week just before creating Adam and Eve.1 Dinosaurs lived at the same time as man for about 1,650 years before the Flood came.2 However, dinosaurs may have mainly lived far away from people since dinosaur fossils occur with shallow marine and swamp-living plants and animals and not with human fossils. Soon after creation, Adam and Eve sinned, so God said, “Cursed is the ground for your sake.”3 This curse affected everything, and eventually all men, and apparently even animals, became so corrupt in their violence4 that God cleansed the whole earth of their filth when “the world that then existed perished, being flooded with water.”5 The Flood made dinosaur fossils.
God told Noah, “Of every living thing of all flesh you shall bring two of every sort into the ark, to keep them alive with you; they shall be male and female.”6 So we know that representatives of each kind of dinosaur went on the Ark. Genesis also indicates that animals on the Ark had nostrils and lived on land, which dinosaur skulls and legs reveal.7 Fossils show that even the largest dinosaurs hatched from eggs not much larger than a football. Noah’s family would likely have taken young sauropods on board the Ark—not full-grown, 100-foot dinosaurs. Most of the other 60 or so dinosaur kinds would have occupied only one corner of one of the Ark’s three decks—like the model on the ICR campus shows.8
After the Flood, dinosaurs and all the other Ark animals migrated from the Middle East to the habitats they preferred. Dinosaurs probably headed to swampy places that became deserts centuries later.9 Genesis 13:10 says, “And Lot lifted his eyes and saw all the plain of Jordan, that it was well watered everywhere (before the Lord destroyed Sodom and Gomorrah) like the garden of the Lord, like the land of Egypt.” The Jordan plain near the Dead Sea began drying after Sodom’s fiery destruction. Egypt also dried.10 Any dinosaurs in these areas would have moved or died when their habitats dried.
The final Bible dinosaur scene comes from Job.11 Clues that behemoth best matches a sauropod include its supreme strength and power, its swampy habitat, its reference as the “first of the ways of God”—suggesting it was the largest created land-living creature—and its tail like a cedar tree.12,13 Job lived after the Flood, so if he could “look now at the behemoth,” and if behemoth was a dinosaur, then some dinosaurs survived the Flood on Noah’s Ark.14
Eventually dinosaurs around the world went extinct, likely because the closing Ice Age brought radical climate changes and people drained swamps and killed off threatening creatures. Memorable encounters gave rise to dragon legends, written descriptions, paintings, and carvings of dinosaurs from around the world.15
Were dinosaurs on Noah’s Ark? History both inside and outside the Bible says, “Yes.”
Interestingly, God told Job, “Look now at the behemoth, which I made along with you” (Job 40:15). Does this refer to God having made behemoth and mankind on creation Day Six? Also, Abraham and Lot may have seen the Job 40:21 reed and marsh lands within the “plain of Jordan,” since behemoth “is confident, though the Jordan gushes into his mouth” (Job 40:23).
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark.. noah's ark contained "dinosaurs". | https://answersingenesis.org/dinosaurs/humans/dinosaurs-ark-how-possible/ | Dinosaurs on the Ark: How It Was Possible | Answers in Genesis | Dinosaurs on the Ark: How It Was Possible
How dinosaurs lived with man, how they were preserved on Noah’s ark—likely as juveniles—and what happened to dinosaurs after the flood
People often wonder how all of the animals could have fit on the ark. Often, “bathtub arks” are loaded down with various species of animals, rather than the biblical kind, which is approximately at the family level of biological classification. Noah didn’t need to bring lions, leopards, and tigers onto the ark, just a single pair from the cat kind. We see so many illustrations of large creatures packed tightly into a little boat. But this image is inaccurate too. Noah’s ark was so much larger than it is usually depicted, and many of the animals were probably smaller than are shown in popular pictures.
The Dinosaur “Hurdle”
But the biggest hurdle people have when they see any of our displays of the ark (or visit the Ark Encounter) is seeing dinosaurs depicted on the ark (or in stalls in the Ark Encounter). Due to evolutionary indoctrination, many people can’t picture man living alongside dinosaurs, or if they do, they think of the Jurassic Park/World movies and view all dinosaurs as wanting to trample or eat people. Even if they overcome or set aside this stumbling block, we still get questions of how dinosaurs could even fit on the ark, particularly when considering the massive dinosaurs, especially the sauropods. Other oft-cited “problems” with dinosaurs on the ark are feeding the herbivores the massive amounts of vegetation that the adults eat, feeding the carnivorous ones (and avoiding being eaten by them), and cleaning up after them.
It makes more sense to think that God would have sent to Noah juveniles (or sub-adults) or smaller varieties within the same kind. Consider the following advantages to bringing juveniles or smaller versions of a creature: they take up less space, they eat less, they create less waste, they are often more docile and easier to manage, they are generally less susceptible to injury, and they would have more time to reproduce after the flood. And considering this last point, wasn’t that the end goal of bringing them on board the ark: to keep them alive and to ensure that they would “be fruitful and multiply on the earth" (Genesis 8:17)? Bringing a full-sized dinosaur that only had a few years left and/or was past its reproductive prime seems illogical and wasteful. Neither of those is characteristic of God.
How Could Noah Have Fed the Carnivorous and Vegetarian Dinosaurs?
Regarding carnivorous activity, we know from the fossil record (most of which is a testimony of the worldwide, globe-covering flood) that some animals were carnivores in the post-fall/pre-flood world. But even if carnivory was prevalent in the late pre-flood world, it is still possible the animals that God sent did not eat meat or were omnivores that could have survived for one year without meat. There have been modern examples of animals normally considered to be carnivores that refused to eat meat, such as the lion known as Little Tyke. Additionally, during times of war or natural disaster when meat was unobtainable, zoos and wildlife parks have utilized meat substitutes1 like nuts, peanut butter, coconuts, beans, soy, and other legumes as their protein-source feed for the animals.2
However, if some of the ark’s animals did eat meat, there are several methods of preserving or supplying their food. Meat can be preserved through drying, smoking, salting, or pickling. Certain fish can pack themselves in mud and survive for years without water—these could have been stored on the ark. Noah may have also brought mealworms and other insects onto the ark as food, and these can be bred for both carnivores and insectivores, providing even necessary amino acids, like taurine. Cricket or grasshopper flour could be baked into breads, as could the ground seeds of gourds. And plants like amaranth and quinoa yield high protein feed. Yeast paste and dried seaweed also contain high amounts of protein and taurine, so Noah quite likely had many options available to him. And occasionally in desperate times, like during the Nazi siege of Leningrad in 1941, even obligate carnivores (in this case, a tiger) have switched to vegetarian diets and survived for several years.3
For the plant-eating dinosaurs, the animals brought on board could have eaten compressed hay, other dried grasses, dried vegetables, seeds and grains, legumes, etc. Another factor that may have reduced food consumption for both vegetarian and carnivorous dinosaurs is that they went into a state of hibernation/brumation or torpor. Many reptiles today begin to eat less, reduce their metabolic rate drastically, and then “sleep” for long periods of time when the weather gets a little cooler, virtually eating nothing and waking up only for brief periods to drink before reentering brumation. Often the best conditions for this state are humidity, temperatures between 50° and 68° F (10° and 20° C), and low-light conditions.4 The outside weather at the time of the flood (rainy and thus likely cool) combined with the lower-light interior compartments of the ark would make ideal hibernation/brumation conditions on the ark. If any of the dinosaurs and other reptiles and amphibians went into brumation, then food requirements would have been severely reduced.
Crunching the Numbers
Noah also did not have to bring marine animals, archaebacteria, bacteria, fungi, or plants (except as food sources and possibly a few hardier live plants/fungi for fresh food) and many (if any) insects onto the ark. Even current estimates are that there are fewer than 34,000 species of known, land-dependent vertebrates in the world today.5
Studies beginning in 2012 estimate that there are fewer than 1,400 known living and extinct kinds among land-dependent vertebrates. In a worst-case scenario, it is projected that Noah was responsible for about 6,700 individual animals—most of them small and easily maintained. Even if the biblical kind were expanded from the family to the genus level (4,416 genera, according to a 2013 study)6, we would still be talking about approximately 16,000 animals total, as in the case study done by researcher John Woodmorappe in 2007.7
Based on the Hebrew common cubit (which Woodmorappe used), which historical records put at 18 inches, we can calculate that Noah’s ark was 450’ L x 75’ W x 45’ H—large enough to contain approximately 350 semi-truck trailers. Based on the Hebrew royal cubit, which we know from descriptions of “a cubit and a handbreadth” (Ezekiel 40:5 and 43:13) to be equal to 20.4 inches (and which is believed to be the older cubit measurement), we can calculate that Noah’s ark was 510’ L x 85’ W x 51’ H—large enough to contain approximately 450 semi-truck trailers. The Hebrew royal cubit is the standard we used when we built the Ark Encounter’s ark.
Back to the dinosaurs, the average dinosaur is about the size of a bison, and the estimated number of dinosaur kinds (of which only a pair of each was brought) may have been about 85, meaning a maximum of 170 dinosaurs were taken aboard the ark. Therefore, the ark had adequate space for every kind of dinosaur, particularly if God sent sub-adults to Noah. For example, the average 24 x 7 ft. cattle trailer can safely haul a maximum of eleven 1,200-pound cows.8 If cattle were transported with a common semi-truck trailer9 of 53 x 8.5 ft., an average of 28 such animals could be safely hauled.10 That means that, even assuming the maximum number of dinosaur kinds, the dinosaurs would need only six of the roughly 450 semi-truck trailers’ worth of storage capacity (for a royal-cubit-sized ark).
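The trailer arithmetic in this section can be sketched the same way. The figures of 85 kinds, 28 animals per trailer, and 450 trailer-equivalents are the article's; treating every dinosaur pair as roughly bison-sized is its simplifying assumption.

```python
# Arithmetic behind the claim that the dinosaurs would occupy only about
# six of the ark's ~450 semitrailer-equivalents of storage space.
dinosaur_kinds = 85                  # estimated maximum number of kinds
animals = dinosaur_kinds * 2         # one pair of each kind -> 170 dinosaurs

ANIMALS_PER_TRAILER = 28             # bison-sized animals per 53 x 8.5 ft trailer
trailers_needed = animals / ANIMALS_PER_TRAILER   # ~6.1, i.e. "about six"

ARK_TRAILER_EQUIVALENTS = 450        # royal-cubit ark, per the article
share = trailers_needed / ARK_TRAILER_EQUIVALENTS

print(animals)                       # 170
print(round(trailers_needed, 1))     # 6.1
print(f"{share:.1%}")                # 1.3% of the ark's capacity
```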
But How Could Noah Care for the Dinosaurs on the Ark?
Creation researcher John Woodmorappe has written extensively on this topic in his book, Noah’s Ark: A Feasibility Study. Some of the methods he mentioned include practical, labor-saving devices like food and water troughs, slatted floors for waste disposal, ventilation systems, and lighting. Answers in Genesis also has a detailed book, a children’s book, and several articles on ventilation and lighting, animal care, and logistical questions, as well as several videos of how Noah and his family could have prepared for and taken care of the animals.
We need to keep in mind that Noah was a very intelligent man and was obeying God’s commands by faith (Hebrews 11:7). And it was God’s desire that the animals on the ark were well cared for and able to disembark healthy and repopulate the new world. To use a perhaps overused phrase, but in this case a highly appropriate one, God did not set Noah up for failure but set him up for success. After all, it was God’s own promises to Noah that were on the line (Genesis 6:19–22, 7:1–3), and God does not go back on his Word (Numbers 23:19; Titus 1:2). Although Scripture records only a few basic instructions on the design of the ark given by God to Noah, it is safe to assume that he providentially guided Noah to ensure that the ark was well designed, durable, and able to do what he intended it to. Some good resources for looking at the technical aspects, and even possible floor plans for the layout of the ark, its animal enclosures, and labor-saving technologies, can also be found on the Answers in Genesis website: Caring for the Animals on the Ark, How Could Noah Fit the Animals on the Ark and Care for Them? and Was There Really a Noah’s Ark & Flood?.
Dinosaurs Were on the Ark and Dinosaurs Came off the Ark
The evolutionary story is that dinosaurs died out 66 million years ago, long before humans evolved. But Scripture tells a quite different account. All land animals were created on day six of creation week. And two of every unclean land animal were commanded by God to be brought onto the ark. Therefore Scripture testifies that dinosaurs would have survived the flood and coexisted with mankind. Then later in history we read of Behemoth, likely a sauropod dinosaur (Job 40), marine reptiles like Leviathan (Job 41, Psalm 104:25–26), and numerous references to dragons and flying serpents (possibly pterosaurs) throughout the Bible—all living alongside mankind. And man, specially created in the image of God (Genesis 1:27), was given dominion over all of God’s creation (Genesis 1:28).
But dinosaurs did not prosper in the post-flood world, and they died out for many of the same reasons that some animals go extinct today. The post-flood world was radically different from the tropical/semi-tropical pre-flood world, and many plant species that the herbivorous dinosaurs likely fed on (cycads and gymnosperms) went extinct or were severely reduced in number and variety. Predation on some of the smaller dinosaurs by larger ones or large mammals, along with disease, also could have contributed to their demise. Mankind may have hunted some dinosaurs for meat or destroyed them because they ravaged crops or were a threat to human survival. As Ken Ham has written, as they began to be seen less and less, they faded from memory and were later remembered as legends. Dragon legends, though likely containing kernels of truth about the size and ferocity of some of the larger dinosaurs, became stories told around campfires and hearths. But the very nature of their encounters with mankind after the flood, in Scripture and in several historical accounts, is testimony to their survival on the ark and their continued presence with man for a few thousand years afterward.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark.. noah's ark contained "dinosaurs". | https://www.gotquestions.org/dinosaurs-Noahs-ark.html | Were there dinosaurs on Noah's ark? | GotQuestions.org | Find Out
Were there dinosaurs on Noahâs ark?
This question presupposes a young earth on which dinosaurs and humanity coexisted and a global flood in the time of Noah. Not all Christians hold to both, or either, of those viewpoints. So, this is not a question that is relevant for all Christians. However, we believe that, if the Bible is interpreted literally, it leads to young earth creationism and a belief that the flood in the time of Noah was indeed global. So, with that in mind, yes, we believe that there were dinosaurs on the ark. They would not have been called âdinosaursâ because that term didnât exist until about 1841. Here are some reasons why we think that dinosaurs were on Noahâs ark:
We know that “in six days the LORD made the heavens and the earth, the sea, and all that is in them” (Exodus 20:11). Taking these days to be literal, twenty-four-hour periods, the dinosaurs would have been created on day six with the rest of the land animals (Genesis 1:24–25). Man was created on the same day (verses 26–27). There is nothing in the Bible that suggests a separation of millions of years between the time of the dinosaurs and the existence of mankind. The descriptions of the behemoth and leviathan in Job 40–41 lend credence to the idea that men and dinosaurs walked the earth together.
Also arguing for the possibility that dinosaurs and man lived side by side—and thus were on Noah’s ark—are ancient depictions of dinosaur-like animals in cave drawings. Various ancient civilizations in Europe, South America, and North America left behind petroglyphs of what look like dinosaurs. We also see dinosaur-like creatures depicted in architecture on castles in Europe and pyramids in South America. We read accounts of human interaction with “dragons,” with the stories coming from Europe, China, and the Middle East. It would be strange for all of these different civilizations to depict things that no one had ever seen, especially since the depictions closely resemble the fossil remains that we now find.
The account of Noah and the ark is found in Genesis 6–7. God tells Noah that he is to take representatives of every living creature on board: “You are to bring into the ark two of all living creatures, male and female, to keep them alive with you. Two of every kind of bird, of every kind of animal and of every kind of creature that moves along the ground will come to you to be kept alive” (Genesis 6:19–20). If dinosaurs were on the earth at that time, then Noah took them on the ark.
A common objection to the idea that there were dinosaurs on the ark—besides the view that humans and dinosaurs never existed at the same time—is that dinosaurs were too big for the ark. The notion that all dinosaurs were three stories tall, of fierce disposition, and bent on eating everything in sight persists in the minds of many. The fact is that the average size of adult dinosaurs was comparable to that of a horse.
But most of the dinosaurs on Noah’s ark would have been even smaller than a horse. To start a new population of animals, Noah would not have begun with old animals past their prime. He would have started with the younger (and therefore smaller) animals of each kind. That huge Apatosaurus skeleton that we see in the museum could have been from an animal several hundred years old. A dinosaur of such size and age would not have been a good candidate for breeding stock for Noah. Noah would naturally have taken juvenile Apatosauruses aboard the ark. Even if the dinosaurs Noah took on board were one year old, most would have been smaller than a full-grown pig. That would mean that there was plenty of room for them (and for their food) on the ark.
Since the Bible does not clearly state that what we call dinosaurs were on the ark, we leave open the possibility that they were not. But, given a young earth interpretation of the early chapters of Genesis, we have no reason to reject the idea that Noah brought dinosaurs onto the ark.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark.. noah's ark contained "dinosaurs". | https://www.reuters.com/article/us-usa-museum/new-museum-says-dinosaurs-were-on-noahs-ark-idUSN2621240720070526 | New museum says dinosaurs were on Noah's Ark | Reuters | New museum says dinosaurs were on Noah's Ark
PETERSBURG, Ky (Reuters) - Like many modern museums, the newest U.S. tourist attraction includes some awesome exhibits -- roaring dinosaurs and a life-sized ship.
But only at the Creation Museum in Kentucky do the dinosaurs sail on the ship -- Noah’s Ark, to be precise.
The Christian creators of the sprawling museum, unveiled on Saturday, hope to draw as many as half a million people each year to their state-of-the-art project, which depicts the Bible’s first book, Genesis, as literal truth.
While the $27 million museum near Cincinnati has drawn snickers from media and condemnation from U.S. scientists, those who believe God created the heavens and the Earth in six days about 6,000 years ago say their views are finally being represented.
“What we’ve done here is to give people an opportunity to hear information that is not readily available ... to challenge them that really you can believe the Bible’s history,” said Ken Ham, president of the group Answers in Genesis that founded the museum.
Here exhibits show the Grand Canyon took just days to form during Noah’s flood, dinosaurs coexisted with humans and had a place on Noah’s Ark, and Cain married his sister to people the earth, among other Biblical wonders.
Scientists, secularists and moderate Christians have pledged to protest the museum’s public opening on Monday. An airplane trailing a “Thou Shalt Not Lie” banner buzzed overhead during the museum’s opening news conference.
Opponents argue that children who see the exhibits will be confused when they learn in school that the universe is 14 billion years old rather than 6,000.
“Teachers don’t deserve a student coming into class saying ‘Gee Mrs. Brown, I went to this fancy museum and it said you’re teaching me a lie,’” Dr. Eugenie Scott, executive director of the National Center for Science Education, told reporters before the museum opened.
A Gallup poll last year showed almost half of Americans believe that humans did not evolve but were created by God in their present form within the last 10,000 years.
Three of 10 Republican presidential candidates said in a recent debate that they did not believe in evolution.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark. noah's ark contained "dinosaurs". | https://www.thedailybeast.com/dinosaurs-on-noahs-ark-only-at-kentuckys-creation-museum | Dinosaurs on Noah's Ark? Only at Kentucky's Creation Museum

Jeff Haynes/AFP via Getty
Summer travel in the United States has slowed but not stopped due to the coronavirus pandemic.
Among those destinations that have recently reopened is, as of June 8, the Creation Museum, a museum dedicated to promoting the Biblical story of Genesis as historic and scientific fact.
More than this, the Creation Museum offers a window into the ideas and workings of the American religious right.
Evangelical Christians make up approximately 25 percent of the U.S. population. A majority of them think the Bible should be read literally and that evolution is false.
The Creation Museum, about which we wrote a book in 2016, promotes a very specific version of this belief, which holds that God made the universe in six 24-hour days about 6,000 years ago.
The first four chapters of the book of Genesis tell the story of Adam and Eve, who were created on the sixth day and given two jobs: to obey God and populate the Earth. When they disobeyed God and ate fruit from the tree of knowledge, they were banished from the Garden of Eden and became mortal.
Adam and Eve did better on their second assignment, though. Eve gave birth to two sons, Cain and Abel, and, according to the Creation Museum, to a daughter who later became Cain’s wife.
According to Genesis, humans eventually became wicked and violent. In response, God sent a global flood that drowned everyone on the planet; the Creation Museum says the dead numbered in the billions.
Only righteous Noah and his family were saved. They, along with some animals—including, according to the Creation Museum, dinosaurs—were safely housed in the ark that God commanded Noah to build.
Since opening in 2007, the Creation Museum has told this story—with an abundance of dinosaur displays and life-size dioramas of the idyllic Garden of Eden—to more than 4 million visitors.
A stone-age woman sits next to a dinosaur in the front lobby at the Creation Museum—even though there is no scientific evidence the two co-existed.
Brittany Greeson/Reuters
Creationism is a central tenet of Protestant fundamentalism, an American evangelical movement that has its roots in the late 19th century just as Darwinian evolution was undermining the story of Genesis.
Around that same time, scholars were also asking substantive questions about who actually wrote the 66 books of the Bible, noting some of its apparent inconsistencies and errors and observing that some of its stories—including that of the giant flood—seemed borrowed from other cultures.
Some conservative evangelical theologians, appalled by the undermining of biblical authority, responded by creating the notion of biblical inerrancy. In this view, the Bible is without error, clearly written and factually accurate—including when it comes to history and science.
The fundamentalist movement emerged in 1919, holding to biblical inerrancy and creationism. They did, however, accept geologists’ assertions that Earth was millions or billions of years old, based on its many layers of rock.
As such, fundamentalists understood God’s six “days” of creation to refer not to 24-hour days, but to eras of indeterminate length.
This posed a problem for biblical inerrancy. If the Bible is best understood literally, how can a “day” be an era?
In 1961 Bible scholar John Whitcomb Jr. and engineer Henry Morris came to the rescue with their book, “The Genesis Flood.” Borrowing heavily from the Seventh-day Adventist George McCready Price—who had spent decades defending his own faith’s belief that God created Earth in six days—Morris and Whitcomb argued that it was Noah’s flood that created Earth’s layers.
In this theory, the planet’s geological strata only give the impression that the Earth is ancient, when in fact these layers were created 6,000 years ago by a global flood that lasted a year.
Young Earth creationism spread through American fundamentalism with astonishing speed in the late 20th century. Among the many Christian organizations established to advance these ideas is Answers in Genesis, or AiG. Founded in 1994 in Petersburg, Kentucky, AiG is a young Earth creationist juggernaut, producing a flood of creationist books, videos, magazines, school curricula and other print and digital materials each year.
As we document in our book, AiG is also heavily invested in the white evangelical right-wing politics that in 2016 helped secure the presidency for Donald Trump.
The 75,000-square-foot Creation Museum, located next to AiG’s main office and down the road from its giant replica of Noah’s Ark is the jewel of AiG’s close to US$50 million assets.
Only at the Creation Museum do dinosaurs sail on Noah’s Ark.
John Sommers II/Reuters
Though the Trump administration derides science and scientists, AiG chief executive Ken Ham claims to be a fan.
In a 2014 debate with Bill Nye, popularly known as “the science guy,” that has been viewed nearly 8 million times on YouTube, Ham said the word “science” 105 times—twice as often as Nye. “I love science!” Ham insisted.
But contemporary mainstream science is defined by its use of the scientific method, in which scientists formulate a hypothesis, conduct experiments to test that hypothesis and then confirm or deny it.
By contrast, creationists begin with a conclusion—that the universe is 6,000 years old—then seek evidence to confirm it. Contravening facts, such as radiometric dating that shows the Earth to be 4.5 billion years old, are rejected.
This exhibit explains in great and seemingly accurate scientific detail that the Allosaurus’s skull is 34 inches long, 22 inches high and has 53 teeth that are about 4.5 inches long, if you include the roots.
Then it states that this Allosaurus perished in Noah’s flood. Those scouring the placards for empirical evidence that dinosaurs scrambled up a hilltop to escape the rising waters will come up short.
Mainstream geologists and biologists will probably find the Creation Museum more frustrating than educational. But for those hoping to better understand the divides of modern American society, the museum is illuminating. It shines a light on the worldview held by a segment of the U.S. population with significant economic resources and political connections at the highest rungs of power.
William Trollinger is a professor of history at the University of Dayton and Susan L. Trollinger is a professor of English at the University of Dayton.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark. noah's ark contained "dinosaurs". | https://www.nbcnews.com/science/science-news/absolutely-wrong-bill-nye-science-guy-takes-noah-s-ark-n608721 | 'Absolutely Wrong': Bill Nye the Science Guy Takes on Noah's Ark ...

Days after white doves and shofar horns christened the opening of a new Noah's Ark attraction in northern Kentucky, the land-anchored ship welcomed a conspicuous and curious visitor: Bill Nye "the Science Guy."
The bow-tied man of science — openly skeptical about the exhibit from the time it was announced — was an invited guest at the Ark Encounter, which opened July 7 and is billed as the largest timber-frame structure in the world, at 51 feet tall and 1-1/2 football fields in length.
Visitors pass outside the front of a replica Noah's Ark at the Ark Encounter theme park during a media preview day on July 5, 2016, in Williamstown, Ky. (John Minchillo / AP)
Nye had to see the voluminous vessel for himself, and set off for the rolling green vistas of Williamstown, a rural community south of Cincinnati.
What he found, he told NBC News, was an eye-catching attraction that was "much more troubling or disturbing than I thought it would be."
"On the third deck (of the ark), every single science exhibit is absolutely wrong," he said. "Not just misleading, but wrong."
State and local officials are banking on the Bible-based theme park to lure tourists and boost the local economy. The project's creator, Ken Ham, hopes it will attract fundamentalist Christians, some of whom are already visiting its nearby sister site, the Creation Museum, built in 2007.
Ham and Nye have been jousting partners since 2014, when they sparred in a debate over creationism versus evolution that was broadcast online from the Creation Museum and racked up millions of views on YouTube.
In exploring the Ark, the famed children's television host was given a personal tour of the 120,000-square-foot structure by Ham, who has emerged as a prominent voice in the "young Earth" creationist movement.
Ham, who was born in Australia and founded the Answers in Genesis ministry, believes the Bible and its Book of Genesis is literal historical fact — which means the Earth would be only about 6,000 years old, as opposed to the roughly 4.5 billion years estimated by scientists.
As represented in the Ark exhibit, dinosaurs co-existed with humans. That's also a big departure from the science of evolution, which says they became extinct some 65 million years ago — long before mankind emerged.
Noah's story, as told in Genesis, says he built an ark at God’s request in anticipation of a Great Flood. The patriarch packed up his family and corralled two of every kind of animal in the world to live on the ship — and for that, God spared him and those creatures.
To Nye, that's hogwash, although some scholars are open to the idea that a historic flood of Biblical proportions could have happened and inspired the Noah tale. Scientists, however, say there's no evidence to suggest an epic, worldwide flood occurred within the past 6,000 years.
Nye takes particular exception to the dinosaurs on the ark — re-created in cages among rows of other odd-looking animal replicas. (Plans to house live animals on board had to be scrapped, and there are also fewer animal replicas than planned to make way for restrooms for the visitors.)
A visitor looks into a cage containing a model dinosaur inside a replica Noah's Ark at the Ark Encounter theme park during a media preview day on July 5, 2016, in Williamstown, Ky. (John Minchillo / AP)
Nye said the exhibit encourages visitors to trust faith over science and thereby undercuts their ability to engage in critical thinking.
"It’s all very troubling. You have hundreds of school kids there who have already been indoctrinated and who have been brainwashed," he said, recalling how one young girl on the Ark told him to change his way of thinking.
"The parents were feeding her word for word," Nye added.
In a Facebook post, Ham said Nye's visit turned into an impromptu "debate" as other visitors huddled around the pair. The experience gave Ham a chance to "share the gospel" with Nye, he said.
"As we ended our walk through the 1st deck in front of life-size models of Noah and his family who were depicted praying, I asked Bill if he would mind if I prayed, and if I could I pray for him. He said I could do whatever I want as he couldn't stop me," Ham wrote. "So while a large group of people were gathered around, I publicly prayed for Bill. I did ask him if we could be friends, but he said we could be acquaintances with mutual respect, but not friends."
Ken Ham, creator of the Ark Encounter, and Bill Nye on July 8, 2016, during a tour of the replica. (The Bill Nye Film)
Nye said that while he appreciated some of the craftsmanship details that went into building the boat, which was the handiwork of Amish laborers, something else behind the scenes has troubled him.
He takes issue with a tax break that the commonwealth of Kentucky provided to the Ark Encounter — built at a cost of $102 million. Despite opposition in 2014 by former state officials, a federal judge earlier this year ruled that the Ark could take advantage of a state sales tax rebate worth as much as 25 percent of the investment.
While the Ark was paid for by a mix of private donations and municipal bonds backed by the project’s future revenues — adult tickets cost $40, while admission is $28 for children from 5 to 12 — Ham insists that taxpayers aren't on the hook for any costs.
"Only visitors to the Ark Encounter pay the sales tax that generates the possible rebate," he wrote in an editorial last month in the Cincinnati Enquirer.
But critics say there's another questionable perk: a 2 percent tax on employees' gross wages intended to help pay off the attraction over the next 30 years. The park is looking to hire 300 to 400 seasonal jobs.
A requirement that potential employees sign a statement that they are Christian has also raised eyebrows. The hiring practice was upheld in the federal judge’s ruling, which said an exemption to the 1964 Civil Rights Act actually permits the Ark to have a religious requirement for employment.
Nye said the religious element of the theme park itself doesn't worry him — rather, he's concerned about what it's passing off as fact.
"I’m not busting anyone's chops about a religion," he said. "This is about the absolutely wrong idea that the Earth is 6,000 years old that’s alarming to me."
Ham, on the other hand, has accused atheists of being "intolerant bullies" toward people of faith. He is confident both the Ark Encounter and his Creation Museum will see a shared surge in interest.
It's unclear how much money the Ark has earned since opening. But in its first six days, a spokeswoman told NBC News there have been about 30,000 visitors.
Members of a documentary crew that joined Nye told NBC News the crowd appeared thin on the day he visited and the parking lot was mostly empty.
Cars sit in the parking lot adjacent to the Ark Encounter in Williamstown, Ky., on its opening day on July 7. (Brendan Hall / The Bill Nye Film)
Ham hopes to attract close to 2 million guests in the attraction's first year.
While he and Nye parted ways without finding common ground, he said having Nye visit during the Ark’s premiere week was beneficial.
"To me, it was so fitting that with the opening of the Ark Encounter, this massive ship is being used to witness to such a well-known personality," Ham wrote on Facebook. "We ended with a friendly handshake."
As for the ark property itself, it's not done expanding. Ham said plans for a walled city and Tower of Babel — intended to warn against the dangers of "prejudice and racism" — will be part of a future phase.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark. noah's ark contained "dinosaurs". | https://www.billygraham.ca/answer/i-am-eight-years-old-and-i-have-a-question-for-you-did-noahs-ark-have-dinosaurs-in-it-or-did-they-maybe-die-in-the-flood/ | I am eight years old and I have a question for you. Did Noah's ark ...
Q:
I am eight years old and I have a question for you. Did Noah's ark have dinosaurs in it? Or did they maybe die in the flood? My parents didn't know the answer and they said to ask you.
A:
Noah lived many thousands of years ago, and the Bible doesn’t give a detailed list of what birds and animals were included on the ark.
The Bible does say, however, that the reason God preserved the animals and birds on the ark was “to keep their various kinds alive throughout the earth” after the flood was gone (Genesis 7:3). Since dinosaurs apparently were extinct a long time before Noah, and also didn’t appear after the flood, it seems unlikely that his ark included such creatures.
Noah’s life is more than just an interesting story, however—and I hope you’ll read it for yourself. (You can find it in Genesis, the first book of the Bible, beginning in chapter 6.) When you read it, I hope you’ll study carefully what kind of person Noah was. He lived in a terrible world—a world that had forgotten God and made fun of Him. But the Bible says, “Noah was a righteous man, blameless among the people of his time, and he walked with God” (Genesis 6:9).
I pray that this will be your goal as you grow older. God loves you, and He wants you to love Him in return. More than that, He wants you to be part of His family and to be with Him in Heaven someday. Ask Jesus to come into your life right now—and He will. Then ask Him to help you to be like Noah, living for Christ and walking with Him every day. God bless you.
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark. noah's ark contained "dinosaurs". | https://arkencounter.com/blog/2020/02/21/how-did-all-the-land-animal-kinds-fit-inside-the-ark/ | How Did All the Land Animal Kinds Fit Inside the Ark? | Ark Encounter

How Did All the Land Animal Kinds Fit Inside the Ark?
When people come to the Ark Encounter, they are often amazed by its size. The Bible says in Genesis 6:15 that Noah’s ark was 300 cubits long, 50 cubits wide, and 30 cubits high. When using a 20.4-inch cubit, this translates to 510 feet long, 85 feet wide, and 51 feet high. But even with its huge size, it is difficult to comprehend its capacity.
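The cubit-to-feet conversion quoted above is plain arithmetic and can be sanity-checked. Note that the 20.4-inch cubit is the blog's own assumption; historical cubit lengths varied, so different assumed cubits give different footage.

```python
# Sanity-check of the blog's conversion: dimensions in cubits, multiplied by
# an assumed 20.4 inches per cubit, divided by 12 inches per foot.
CUBIT_INCHES = 20.4  # the blog post's assumed "long cubit"
INCHES_PER_FOOT = 12.0

def cubits_to_feet(cubits: float) -> float:
    """Convert cubits to feet under the 20.4-inch-cubit assumption."""
    return cubits * CUBIT_INCHES / INCHES_PER_FOOT

for name, cubits in [("length", 300), ("width", 50), ("height", 30)]:
    print(f"{name}: {cubits} cubits = {cubits_to_feet(cubits):.0f} feet")
# length: 300 cubits = 510 feet
# width: 50 cubits = 85 feet
# height: 30 cubits = 51 feet
```

This reproduces the 510 ft × 85 ft × 51 ft figures given in the post.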
The Ark Encounter is filled with answers to fundamental questions that often confront Christians today.
What Kinds of Animals Were on the Ark?
The Bible tells us that the ark housed representatives of every land-dependent, air-breathing animal—ones that could not otherwise survive the flood. Therefore, Noah did not care for marine animals, and he probably did not need to bring insects, with the possible exception of delicate insects like butterflies and moths, since most insects could survive outside the ark.
There were two of some animal kinds; certain clean and winged creatures had seven or fourteen. Recent researchers, using broad parameters, estimated the Ark may have held about 1,400 animal kinds, which results in close to 7,000 total animals. This number is almost certainly too high.
Were There Dinosaurs on the Ark?
The word dinosaur was not invented until 1841 and is just a term that refers to about 80 families or types of land animals. So, what we are really asking is, “Were all the land animal kinds represented on the ark?” And the answer is yes.
From an evolutionary worldview, there was no ark, global flood, and dinosaurs died out millions of years before man existed. But a biblical worldview indicates that dinosaurs were created only a few thousand years ago with the rest of creation. And since dinosaurs are land-dwelling and air-breathing, God sent two of each kind at the time of Noah to go on the ark.
How Would They Fit and Be Cared For?
Most of the animals would have been juveniles or smaller varieties within a given animal kind. Therefore, the representatives of the biggest animals would take up less space, eat less food, create less waste, and have longer to reproduce after the flood.
Studies of non-mechanized animal care indicate that eight people could have fed and watered 16,000 creatures. One key is to avoid unnecessary distance between the animals and their stored food and water. Even better, drinking water could have been piped into troughs, just as the Chinese have used bamboo pipes for this purpose for thousands of years.
The use of some sort of self-feeders, as is commonly done in modern bird care, would have been relatively easy and probably essential. Animals that required special care or diets were uncommon and would not have needed an inordinate amount of time from the handlers. Even animals with the most specialized diets in nature could have been switched to readily sustainable substitute diets.
These are just some of the questions that can be answered at the Ark Encounter. Plan your trip to see for yourself!
Archaeology | Were there dinosaurs on Noah's Ark? | yes_statement | "dinosaurs" were on noah's ark. noah's ark contained "dinosaurs". | https://www.latimes.com/nation/nationnow/la-na-noahs-ark-kentucky-20160705-snap-story.html | Noah's ark attraction, complete with dinosaurs in cages, ready to ...

Noah's ark attraction, complete with dinosaurs in cages, ready to open in Kentucky
A 510-foot-long, $100 million Noah’s ark attraction built by Christians who say the biblical story really happened is ready to open in Kentucky this week.
Since its announcement in 2010, the ark project has rankled opponents who say the attraction will be detrimental to science education and shouldn’t have won state tax incentives.
“I believe this is going to be one of the greatest Christian outreaches of this era in history,” said Ken Ham, president of Answers in Genesis, the ministry that built the ark.
Ham said the massive ark, based on the tale of a man who got an end-of-the-world warning from God about a massive flood, will stand as proof that the stories of the Bible are true. The group invited media and thousands of supporters for a preview Tuesday, the first glimpse inside the giant, mostly wood structure.
[Photo gallery]
Visitors outside the Ark Encounter attraction in Williamstown, Ky. (John Minchillo / Associated Press)
A visitor looks into a cage containing a model dinosaur at Ark Encounter. (John Minchillo / Associated Press)
Ken Ham speaks at a ribbon cutting at Ark Encounter. (Aaron P. Bernstein / Getty Images)
Children look into a cage containing model dinosaurs at the Ark Encounter theme park.
“People are going to come from all over the world,” Ham said to thousands of people in front of the ark.
The ark will open to the public Thursday and Ham’s group has estimated it will draw 2 million visitors in its first year, putting it on par with some of the big-ticket attractions in nearby Cincinnati.
The group says the ark is built based on dimensions in the Bible. Inside are museum-style exhibits: displays of Noah’s family along with rows of cages containing animal replicas, including dinosaurs.
The group believes that God created everything about 6,000 years ago — man, dinosaur and everything else — so dinosaurs still would’ve been around at the time of Noah’s flood. Scientists say dinosaurs died out about 65 million years before man appeared.
An ark opponent who leads an atheist group called the Tri-State Freethinkers said the religious theme park will be unlike any other in the nation because of its rejection of science.
“Basically, this boat is a church raising scientifically illiterate children and lying to them about science,” said Jim Helton, who lives about a half-hour from the ark.
Ham said the total cost of the ark surpassed $100 million, a far cry from a few years ago, when fundraising for the boat was sluggish and much larger theme park plans had to be scaled back.
Millions of people first learned about plans for the ark during a debate on evolution between TV’s Bill Nye “the Science Guy” and Ham in early 2014.
A few weeks later, a local bond issuance infused tens of millions of dollars into struggling fundraising efforts. And earlier this year, a federal judge ruled the ark could receive a Kentucky sales tax incentive worth up to $18 million while giving a strict religious test to its employees.
Months later, the tax incentive ruling still has some opponents of the boat scratching their heads.
“It’s a clear violation of separation of church and state. What they’re doing is utterly ridiculous and anywhere else, I don’t think it would be allowed,” Helton said.
The court ruled in January that Kentucky officials could not impose requirements on the ark that were not applied to other applicants for the tax incentive, which rebates a portion of the sales tax collected by the ark. That cleared the way for the group to seek out only Christians to fill its labor force. New applicants will be required to sign a statement saying they’re Christian and “profess Christ as their savior.”
Philip Steele, one of the thousands who got an early preview of the ark Tuesday, echoed Ham’s often repeated comment that the sales tax generated by the ark wouldn’t exist if the ark was never built.
“I just don’t think they understand it,” Steele said of the ark’s critics. “They’ll be able to keep a portion of [the sales tax] to further their ministry, but so be it.”
When Ham was asked about the tax incentive at the Tuesday event, he drew loud cheers when he proclaimed no taxpayer money was used to build the ark.
As much of a boon as the $18 million tax break would be, Bill Nye’s agreeing to debate Ham may have helped turn the tide of years of sluggish fundraising.
Nye, a high-profile science advocate and former TV personality, debated Ham on evolution and drew a huge online audience. Nye later said he didn’t realize the attention it would draw and said he was “heartbroken and sickened for the Commonwealth of Kentucky.”
The video of the debate posted by Answers in Genesis on YouTube has 5.4 million views.
About three weeks after the debate, Ham announced that a bond offering from the city of Williamstown had raised $62 million for the project, and a few months later Answers in Genesis was breaking ground at the site of the ark.