Dear America: So Far From Home. She appeared in Season 2 of Robson Arms. Bertram was a high school teacher; she also taught young actors at Biz Studio in Vancouver. Bertram is married and has two children. Awards Bertram has won two Gemini Awards for "Best Performance in a Children's or Youth Program or Series" for Ready or Not, in 1995 and in 1998. She has also been nominated for two Gemini Awards: for "Best Performance in a Children's or Youth Program or Series" for Ready or Not in 1996, and for Best Performance by an Actress in a Leading Role in a Dramatic Program or Mini-Series for
is an American retired professional bodybuilder and actor. As a bodybuilder, Ferrigno won an IFBB Mr. America title and two consecutive IFBB Mr. Universe titles, and appeared in the bodybuilding documentary Pumping Iron. As an actor, he is best known for his title role in the CBS television series The Incredible Hulk and for vocally reprising the role in subsequent animated and computer-generated incarnations. He has also appeared in European-produced fantasy-adventures such as Sinbad of the Seven Seas and Hercules, and as himself in the sitcom The King of Queens and the 2009 comedy I Love You, Man. Early life Ferrigno was born in Brooklyn, New York, to Victoria and Matt Ferrigno, a police lieutenant. He is of Italian descent. Ferrigno believes that soon after he was born he suffered a series of ear infections and lost 75 to 80% of his hearing, though his condition was not diagnosed until he was three years old. His hearing loss and a speech impediment caused Ferrigno to be bullied during his childhood by peers who called him "deaf" and "mute". He began reading comic books such as Hulk and Spider-Man at this time, later saying "I was obsessed with power" and "I wanted to be strong enough so that I could be able to defend myself". Ferrigno started weight training at age 13, citing bodybuilder and Hercules star Steve Reeves as one of his role models. Because he could not afford to buy weights, he made his own using a broomstick and pails which he partially filled with cement. He was also a fan of the Hercules films that starred Reeves. Ferrigno attended St. Athanasius Grammar School and Brooklyn Technical High School, where he learned metalworking. Bodybuilding career After graduating from high school in 1969, Ferrigno won his first major title, IFBB Mr. America. Four years later, he won the title IFBB Mr. Universe. Early in his career he lived in Columbus, Ohio, and trained with Arnold Schwarzenegger. In 1974, he came in second on his first attempt at the Mr.
Olympia competition. He came in third the following year, and his attempt to beat Arnold Schwarzenegger was the subject of the 1977 documentary Pumping Iron. The documentary made Ferrigno famous. These victories, however, did not provide enough income for him to earn a living. His first paying job was as a $10-an-hour sheet metal worker in a Brooklyn factory, where he worked for three years. He did not enjoy the dangerous work, and left after a friend and co-worker accidentally cut off his own hand. Following this, Ferrigno left the competition circuit for many years, a period that included a brief stint as a defensive lineman for the Toronto Argonauts in the Canadian Football League. He had never played football, and was cut after two games. Ferrigno left the world of Canadian football after he broke the legs of a fellow player during a scrimmage. During competition, Ferrigno stood at almost . He weighed in at 268 lb (122 kg) in 1975, and 315 lb (143 kg) in 1992. Ferrigno competed in the first annual World's Strongest Man competition in 1977, where he finished fourth in a field of eight competitors. In the early 1990s, Ferrigno returned to bodybuilding, competing for the 1992 and 1993 Mr. Olympia titles. Finishing 12th and 10th, respectively, he then turned to the 1994 Masters Olympia, where his attempt to beat Robbie Robinson and Boyer Coe was the subject of the 1996 documentary Stand Tall. After this, he retired from competition. Acting career 1977–2008 In 1977, Ferrigno was cast as the Hulk in The Incredible Hulk. Despite the fact that they were rarely on camera together, Ferrigno and Bill Bixby – who played the Hulk's "normal" alter ego – became friends; Ferrigno has described Bixby as a "mentor" and "father figure" who took him under his wing. Ferrigno also singles out the instances in which Bixby directed Ferrigno in some episodes as particularly memorable.
Ferrigno continued playing the Hulk role until 1981—although the last two episodes were not broadcast until May 1982. Later, he and Bixby co-starred in three The Incredible Hulk TV movies. In November 1978 and again in May 1979, Ferrigno appeared in Battle of the Network Stars. In 1983, Ferrigno appeared as John Six on the short-lived medical drama Trauma Center. Ferrigno played himself during intermittent guest appearances on the CBS sitcom The King of Queens, beginning in 2000 and continuing until the program's conclusion in 2007. He and his wife Carla were depicted as the main characters' next-door neighbors. Because of his role as the title character on The Incredible Hulk, he is often the target of Hulk jokes by Doug and his friends. He made cameo appearances as a security guard in both the 2003 film Hulk and the 2008 film The Incredible Hulk, in which he also voiced the Hulk.
In the latter film, Bruce Banner (Edward Norton) bribes him with a pizza in order to gain entry into a university building. He then went on to voice the Hulk in other Marvel Cinematic Universe films, uncredited. He continued to be known as the voice of the Hulk until 2015's Avengers: Age of Ultron. Ferrigno has since been replaced by Mark Ruffalo as the voice of the Hulk in subsequent films. 2009–present He trained Michael Jackson on and off beginning in the early 1990s, and in 2009, he helped Jackson get into shape for a planned series of concerts in London, which were ultimately cancelled due to Jackson's untimely death. Ferrigno took part in a Smosh video, titled "I Love Lou Ferrigno", in which he is tracked down by one of Smosh's members, Anthony, in Hollywood. The skit ends with Ferrigno knocking Anthony unconscious, in response to Ian's claim that Anthony stole Ferrigno's Butterfinger. Ferrigno has his own line of fitness equipment called Ferrigno Fitness. In January 2009, he provided equipment to The Price Is Right for use as a One Bid prize, and demonstrated the equipment himself. In 2016, Ferrigno appeared as a playable Lego version of himself in Lego Marvel's Avengers. Non-acting endeavors In February 2006, Ferrigno was sworn in as a Los Angeles County, California, reserve sheriff's deputy, Level II. In November 2010, Maricopa County, Arizona, sheriff Joe Arpaio swore Ferrigno in as a member of a volunteer sheriff posse, which also included actors Steven Seagal and Peter Lupus, in order to help control illegal immigration in the Phoenix Valley area. Ferrigno was a contestant on season five of the NBC reality television series The Celebrity Apprentice, which premiered in February 2012. He appeared on the program in order to raise money for his charity, the Muscular Dystrophy Association.
Ferrigno was Team Unanimous' project manager for the task depicted in the fifth episode, "I'm Going to Mop the Floor With You," which was to create a viral video to promote O-Cedar's ProMist Spray Mop, placing him in competition with actress Tia Carrere, the project manager of the women's team, Forte. In addition to the usual $20,000 awarded to the charity of the project manager of the winning team, O-Cedar pledged an additional $30,000 for that task. Team Unanimous' video—in which Ferrigno appeared dancing while mopping—won the task, earning the $50,000 for the Muscular Dystrophy Association. He was fired in episode nine, "Ad Hawk", which involved creating a 60-second commercial for Entertainment.com. In June 2012, Ferrigno was sworn in as a reserve deputy to the San Luis Obispo County, California, Sheriff's Department. There he completed his level I law enforcement academy, bringing his training up to full peace officer status. In September 2013, Ferrigno was sworn in as a special deputy to the Delaware County, Ohio, Sheriff's Department. In May 2018, President Donald Trump appointed Ferrigno to be a member of his Council on Sports, Fitness & Nutrition. Personal life Due to ear infections suffered soon after birth, Ferrigno lost 75 to 80% of his hearing and has been using hearing aids since the age of five. Ferrigno says his hearing loss helped shape his sense of determination in his youth, saying, "I think that if I wasn't hard of hearing I wouldn't be where I am now. Early on, as a youngster it was difficult, but I'm not ashamed to talk about it because many people have misconceptions about hearing loss; like who has hearing loss and what
tsars of Russia. Imperialism In Imperialism, the Highest Stage of Capitalism (1916), Lenin's economic analyses indicated that capitalism would transform into a global financial system, by which industrialised countries exported financial capital to their colonies and so realised the exploitation of the labour of the natives and of the natural resources of their countries. Such superexploitation allows wealthy countries to maintain a domestic labour aristocracy with a slightly higher standard of living than the majority of workers, and so ensures peaceful labour–capital relations in the capitalist homeland. Therefore, a proletarian revolution of workers and peasants could not occur in capitalist countries whilst the imperialist global-finance system remained in place. The first proletarian revolution would instead have to occur in an under-developed country, such as Imperial Russia, which was the politically weakest country in the capitalist global-finance system in the early 20th century. In the United States of Europe Slogan (1915), Lenin wrote: In Left-Wing Communism: An Infantile Disorder (1920), Lenin wrote: Leninist praxis Vanguard party In Chapter II, "Proletarians and Communists", of The Communist Manifesto (1848), Marx and Engels present the communist party as the political vanguard solely qualified to lead the proletariat in revolution: The revolutionary purpose of the Leninist vanguard party is to establish the dictatorship of the proletariat with the support of the working class. The Communist Party would lead the popular deposition of the Tsarist government and then transfer power of government to the working class; that change of ruling class—from the bourgeoisie to the proletariat—makes possible the establishment of socialism. In What Is To Be Done?
(1902), Lenin said that a revolutionary vanguard party, recruited from the working class, should lead the political campaign, because only in that way would the proletariat successfully realise their revolution—unlike the economic campaign of trade-union struggle advocated by other socialist political parties and by the anarcho-syndicalists. Like Marx, Lenin distinguished between the aspects of a revolution: the "economic campaign" (labour strikes for increased wages and work concessions), which featured diffused plural leadership; and the "political campaign" (socialist changes to society), which required the decisive, revolutionary leadership of the Bolshevik vanguard party. Democratic centralism Based upon the First International (IWA, International Workingmen's Association, 1864–1876), Lenin organised the Bolsheviks as a democratically centralised vanguard party, wherein free political speech was recognised as legitimate until a policy consensus was reached; afterwards, every member of the Party was expected to abide by the agreed policy. Democratic debate was Bolshevik practice, even after Lenin banned factions within the Party in 1921. Despite being a guiding political influence, Lenin did not exercise absolute power, and continually debated to have his points of view accepted as a course of revolutionary action. In Freedom to Criticise and Unity of Action (1905), Lenin said: Proletarian revolution Before the October Revolution, despite supporting moderate political reform—including Bolsheviks elected to the Duma, when opportune—Lenin said that capitalism could only be overthrown with proletarian revolution, not with gradual reforms, whether from within (Fabianism) or from without (social democracy), which would fail because the bourgeoisie's control of the means of production determined the nature of political power in Russia.
As epitomised in the slogan "For a Democratic Dictatorship of the Proletariat and Peasantry," a proletarian revolution in underdeveloped Russia required a united proletariat (peasants and industrial workers) in order to successfully assume power of government in the cities. Moreover, owing to the middle-class aspirations of much of the peasantry, Leon Trotsky said that proletarian leadership of the revolution would ensure truly socialist and democratic socio-economic change. Dictatorship of the proletariat In Bolshevik Russia, government by direct democracy was realised and effected by the soviets (elected councils of workers) which Lenin said was the "democratic dictatorship of the proletariat" postulated in orthodox Marxism. The soviets comprised representative committees from the factories and the trade unions, but excluded the capitalist social class to ensure the establishment of a proletarian government, by and for the working class and the peasants. Concerning the political disenfranchisement of the capitalist social-class in Bolshevik Russia, Lenin said that "depriving the exploiters of the franchise is a purely Russian question, and not a question of the dictatorship of the proletariat, in general.… In which countries…democracy for the exploiters will be, in one or another form, restricted…is a question of the specific national features of this or that capitalism." In chapter five of The State and Revolution (1917), Lenin describes the dictatorship of the proletariat as: Concerning the disenfranchisement from democracy of the capitalist social class, Lenin said: "Democracy for the vast majority of the people, and suppression by force, i.e. exclusion from democracy, of the exploiters and oppressors of the people—this is the change democracy undergoes during the transition from capitalism to communism." 
The dictatorship of the proletariat was effected with soviet constitutionalism, a form of government opposite to the dictatorship of capital (privately owned means of production) practised in bourgeois democracies. Under soviet constitutionalism, the Leninist vanguard party would be one of many political parties competing for election to government power. Nevertheless, because of the Russian Civil War (1917–1924) and the anti-Bolshevik terrorism of opposing political parties aiding the White Armies' counter-revolution, the Bolshevik government banned all other political parties, which left the Leninist vanguard party as the sole political party in Russia. Lenin said that such political suppression was not philosophically inherent to the dictatorship of the proletariat. Economics The Bolshevik government nationalised industry and established a foreign-trade monopoly to allow the productive co-ordination of the national economy, and so prevent Russian national industries from competing against each other. To feed the populaces of town and country, Lenin instituted War Communism (1918–1921) as a necessary condition—adequate supplies of food and weapons—for fighting the Russian Civil War. In March 1921, the New Economic Policy (NEP, 1921–1929) allowed limited, local capitalism (private commerce and internal free trade) and replaced grain requisitions with an agricultural tax managed by state banks. The NEP was meant to resolve the food-shortage riots of the peasantry; it allowed limited private enterprise, whose profit motive encouraged farmers to produce the crops required to feed town and country, and aimed to economically re-establish the urban working class, which had lost many workers to the Civil War against the counter-revolution. The nationalised economy retained under the NEP would then facilitate the industrialisation of Russia, politically strengthen the working class, and raise the standard of living for all Russians.
Lenin said that the appearance of new socialist states was necessary for strengthening Russia's economy in the establishment of Russian socialism. Lenin's socio-economic perspective was supported by the German Revolution of 1918–1919, the Italian insurrection and general strikes of 1920, and worker wage-riots in the UK, France, and the US. National self-determination In recognising and accepting nationalism among oppressed peoples, Lenin advocated their national right to self-determination, and so opposed Russian chauvinism, because such ethnocentrism was a cultural obstacle to establishing the dictatorship of the proletariat in every territory of the deposed Russian Empire (1721–1917). In The Right of Nations to Self-determination (1914), Lenin said: The socialist internationalism of Marxism and Bolshevism is based upon class struggle and upon peoples transcending nationalism, ethnocentrism, and religion—the intellectual obstacles to progressive class consciousness—which are the cultural status quo that the capitalist ruling class manipulates in order to politically divide the working classes and the peasant classes. To overcome that barrier to establishing socialism, Lenin said that acknowledging nationalism, as a peoples' right of self-determination and right of secession, naturally would allow socialist states to transcend the political limitations of nationalism to form a federation. In The Question of Nationalities, or "Autonomisation" (1923), Lenin said: Socialist culture The role of the Leninist vanguard party was to politically educate the workers and peasants to dispel the societal false consciousness of religion and nationalism that constitute the cultural status quo taught by the bourgeoisie to the proletariat to facilitate their economic exploitation of peasant and worker.
Influenced by Lenin, the Central Committee of the Bolshevik Party stated that the development of the socialist workers' culture should not be "hamstrung from above" and opposed the Proletkult's (1917–1925) organisational control of the national culture. Leninism after 1924 Stalinism In post-Revolutionary Russia, Stalinism (socialism in one country) and Trotskyism (permanent world revolution) were the principal philosophies of communism that claimed legitimate ideological descent from Leninism; thus, within the Communist Party, each ideological faction denied the political legitimacy of the opposing faction. Until shortly before his death, Lenin countered Stalin's disproportionate political influence in the Communist Party and in the bureaucracy of the Soviet government, partly because of the abuses Stalin had committed against the populace of Georgia and partly because the autocratic Stalin had accumulated administrative power disproportionate to his office of General Secretary of the Communist Party. The counter-action against Stalin aligned with Lenin's advocacy of the right of self-determination for the national and ethnic groups of the deposed Tsarist Empire. Lenin warned the Party that Stalin has "unlimited authority concentrated in his hands, and I am not sure whether he will always be capable of using that authority with sufficient caution", and formed a faction with Leon Trotsky to remove Stalin as the General Secretary of the Communist Party (Lenin, V.I., "Last Testament" Letters to the Congress, 1923–24, in Lenin Collected Works, Volume 36, pp. 593–611; available online at Marxists.org, retrieved 30 November 2011). To that end followed proposals to reduce the administrative powers of party posts in order to reduce bureaucratic influence upon the policies of the Communist Party. Lenin advised Trotsky to emphasise Stalin's recent bureaucratic alignment in such matters (e.g.
undermining the anti-bureaucratic Workers' and Peasants' Inspection) and argued to depose Stalin as General Secretary. Despite Lenin's advice to refuse "any rotten compromise", Trotsky did not heed it, and General Secretary Stalin retained power over the Communist Party and the bureaucracy of the Soviet government. Trotskyism After Lenin's death (21 January 1924), Trotsky ideologically battled the influence of Stalin, who formed ruling blocs within the Russian Communist Party (with Grigory Zinoviev and Lev Kamenev, then with Nikolai Bukharin, and then by himself) and so determined soviet government policy from 1924 onwards. The ruling blocs continually denied Stalin's opponents the right to organise as an opposition faction within the party—thus the reinstatement of democratic centralism and free speech within the Communist Party were key arguments of Trotsky's Left Opposition and the later Joint Opposition. In the course of instituting government policy, Stalin promoted the doctrine of socialism in one country (adopted 1925), wherein the Soviet Union would establish socialism upon Russia's economic foundations (and support socialist revolutions elsewhere). Conversely, Trotsky held that socialism in one country would economically constrain the industrial development of the Soviet Union, which thus required assistance from new socialist countries in the developed world—assistance essential for maintaining soviet democracy, by 1924 much undermined by the Russian Civil War against the White Army counter-revolution. Trotsky's theory of permanent revolution proposed that socialist revolutions in underdeveloped countries would go further towards dismantling feudal régimes and establish socialist democracies that would not pass through a capitalist stage of development and government.
Hence, revolutionary workers should politically ally with peasant political organisations, but not with capitalist political parties. In contrast, Stalin and his allies proposed that alliances with capitalist political parties were essential to realising a revolution where communists were too few. That Stalinist practice failed, especially in the Northern Expedition portion of the Chinese Revolution (1926–1928), which resulted in the right-wing Kuomintang's massacre of the Chinese Communist Party. Despite the failure, Stalin's policy of mixed-ideology political alliances nonetheless became Comintern policy. Until exiled from Russia in 1929, Trotsky developed and led the Left Opposition (and the later Joint Opposition) with members of the Workers' Opposition, the Democratic Centralists, and (later) the Zinovievists. Trotskyism predominated in the politics of the Left Opposition, which demanded the restoration of soviet democracy, the expansion of democratic centralism in the Communist Party, national industrialisation, international permanent revolution, and socialist internationalism. The Trotskyist demands countered Stalin's political dominance of the Communist Party, which was officially characterised by the "cult of Lenin", the rejection of permanent revolution, and advocacy of the doctrine of socialism in one country. The Stalinist economic policy vacillated between appeasing the capitalist interests of the kulaks in the countryside and destroying them as a social class. Initially, the Stalinists also rejected the national industrialisation of Russia, but then pursued it in full, sometimes brutally. In both cases, the Left Opposition denounced the regressive nature of Stalin's policy towards the wealthy kulak social class and the brutality of forced industrialisation. Trotsky described Stalinist vacillation as a symptom of the undemocratic nature of a ruling bureaucracy.
During the 1920s and the 1930s, Stalin fought and defeated the political influence of Trotsky and the Trotskyists in Russia by means of slander, antisemitism, censorship, expulsions, exile (internal and external), and imprisonment. The anti-Trotsky campaign culminated in the executions (official and unofficial) of the Moscow Trials (1936–1938), which were part of the Great Purge of the Old Bolsheviks who had led the Revolution (Rogovin, Vadim Z., Stalin's Terror of 1937–1938: Political Genocide in the USSR, 2009, translated to English by Frederick S. Choate from the Russian-language Party of the Executed). Analysis Some historians, such as Richard Pipes, consider Stalinism the natural consequence of Leninism, holding that Stalin "faithfully implemented Lenin's domestic and foreign policy programs." Robert Service notes that "institutionally and ideologically Lenin laid the foundations for a Stalin [...] but the passage from Leninism to the worse terrors of Stalinism was not smooth and inevitable." Historian and Stalin biographer Edvard Radzinsky believes that Stalin was a real follower of Lenin, exactly as he claimed himself. Proponents of continuity cite a variety of contributory factors: that it was Lenin, rather than Stalin, whose civil-war measures introduced the Red Terror with its hostage-taking and internment camps; that it was Lenin who developed the infamous Article 58; and that it was Lenin who established the autocratic system within the Russian Communist Party. Proponents also note that Lenin put a ban on factions within the party and introduced the one-party state in 1921, a move that enabled Stalin to get rid of his rivals easily after Lenin's death, and cite Felix Dzerzhinsky, who exclaimed during the Bolshevik struggle against opponents in the Russian Civil War: "We stand for organized terror—this should be frankly stated."
Opponents of this view include revisionist historians and a number of post-Cold War and otherwise dissident Soviet historians including Roy Medvedev, who argues that "one could list the various measures carried out by Stalin that were actually a continuation of anti-democratic trends and measures implemented under Lenin", but that "in so many ways, Stalin acted, not in line with Lenin's clear instructions, but in defiance of them." In doing so, some historians have tried to distance Stalinism from Leninism in order to undermine the totalitarian view that the negative facets of Stalin were inherent in communism from the start. Critics of this kind include anti-Stalinist communists such as Leon Trotsky, who pointed out that Lenin attempted to persuade the Russian Communist Party to remove Stalin from his post as its General Secretary. Lenin's Testament, the document which contained this order, was suppressed after Lenin's death. In his biography of Trotsky, British historian Isaac Deutscher says that, on being faced with the evidence, "only the blind and the deaf could be unaware of the contrast between Stalinism and Leninism." A similar analysis is present in more recent works such as those of Graeme Gill, who argues that "[Stalinism was] not a natural flow-on of earlier developments; [it formed a] sharp break resulting from conscious decisions by leading political actors." However, Gill notes that "difficulties with the use of the term reflect problems with the concept of Stalinism itself. The major difficulty is a lack of agreement about what should constitute Stalinism." Revisionist historians such as Sheila Fitzpatrick have criticized the focus upon the upper levels of society and the use of Cold War concepts such as totalitarianism which have obscured the reality of the system. As a form of Marxism, revolutionary Leninism was much criticised as an undemocratic interpretation of socialism. 
In The Nationalities Question in the Russian Revolution (1918), Rosa Luxemburg criticised the Bolsheviks for the suppression of the All Russian Constituent Assembly (January 1918), the partitioning of the feudal estates to the peasant communes, and the policy of the right of self-determination of every national people of the Russias. She argued that the strategic (geopolitical) mistakes of the Bolsheviks would create great dangers for the Russian Revolution, such as the bureaucratisation that would arise to administer the oversized country that was Bolshevik Russia. In defense of expedient revolutionary practice, in 'Left-Wing' Communism: An Infantile Disorder (1920) Lenin dismissed the political and ideological complaints of the anti-Bolshevik critics, who claimed ideologically correct stances that were to the political left of Lenin. In Marxist philosophy, left communism is a range of political perspectives that are left-wing among communists. Left communism criticizes the ideology that the Bolshevik Party practised as the revolutionary vanguard. Ideologically, left communists present their perspectives and approaches as authentic Marxism, and thus more oriented to the proletariat than the Leninism of the Communist International at its first (1919) and second (1920) congresses. Proponents of left communism include Amadeo Bordiga, Herman Gorter, Paul Mattick, Sylvia Pankhurst, Anton Pannekoek, and Otto Rühle. Historically, the Dutch–German communist left has been most critical of Lenin and Leninism (Pannekoek, Anton, Lenin As Philosopher, 1938), yet the Italian communist left remained Leninist. Bordiga said: "All this work of demolishing opportunism and 'deviationism' (Lenin: What Is To Be Done?) is today the basis of party activity.
The party follows revolutionary tradition and experiences in this work during these periods of revolutionary reflux and the proliferation of opportunist theories, which had as their violent and inflexible opponents Marx, Engels, Lenin, and the Italian Left." In The Lenin Legend (1935), Paul Mattick said that the council communist tradition, begun by the Dutch–German leftists, also is critical of Leninism. Contemporary left-communist organisations, such as the Internationalist Communist Tendency and the International Communist Current, view Lenin as an important and influential theorist, but remain critical of Leninism as political praxis for proletarian revolution ("Lenin's Legacy"). Nonetheless, the Bordigism of the International Communist Party abides by Bordiga's strict Leninism. Ideologically aligned with the Dutch–German left, among the ideologists of contemporary communisation, the theorist Gilles Dauvé criticised Leninism as a "by-product of Kautskyism." In The Soviet Union Versus Socialism (1986), Noam Chomsky said that Stalinism was the logical development of Leninism, and not an ideological deviation from Lenin's policies, which resulted in collectivisation enforced with a police state. In light of the tenets of socialism, Leninism was a right-wing deviation of Marxism. The vanguard-party revolution of Leninism became the ideological basis of the communist parties of the socialist political spectrum. In the People's Republic of China, the Chinese Communist Party organised itself with Maoism (the Thought of Mao Zedong), socialism with Chinese characteristics. In Singapore, the People's Action Party (PAP) featured internal democracy and initiated single-party dominance in the government and politics of Singapore. In the event, the practical application of Maoism to the socio-economic conditions of Third World countries produced revolutionary vanguard parties, such as the Communist Party of Peru – Red Fatherland.
See also Anti-Leninism "He who does not work neither shall he eat" National delimitation in the Soviet Union Yellow socialism References Further reading Selected works by Vladimir Lenin The Development of Capitalism in Russia, 1899. What Is To Be Done? Burning Questions of Our Movement, 1902. The Three Sources and Three Component Parts of Marxism, 1913. The Right of Nations to Self-Determination, 1914. Imperialism, the Highest Stage of Capitalism, 1917. The State and Revolution, 1917. The Tasks of the Proletariat in the Present Revolution (The "April Theses"), 1917. "Left-Wing" Childishness and the Petty-Bourgeois Mentality, 1918. Left-Wing Communism: an Infantile Disorder, 1920. "Last Testament" Letters to the Congress, 1923–1924. Histories Isaac Deutscher. The Prophet Armed: Trotsky 1879–1921, 1954. Isaac Deutscher. The Prophet Unarmed: Trotsky 1921–1929, 1959. Moshe Lewin. Lenin's Last Struggle, 1969. Edward Hallett Carr. The Russian Revolution From Lenin to Stalin: 1917–1929, 1979. Other authors External links Works by Vladimir Lenin What Is To Be Done?. Imperialism: The Highest Stage of Capitalism. The State and Revolution. "The Lenin Archive". "First Conference of the Communist International". |
was, until the establishment of the Bronze Star Medal in 1944, the only decoration below the Silver Star which could be awarded for combat valor, and the only decoration below the Distinguished Service Medal which could be awarded for meritorious noncombat service. After World War II After the establishment of the Bronze Star Medal (BSM) in February 1944, the Legion of Merit was awarded almost exclusively to senior officers in the rank of lieutenant colonel (Army, Marine Corps and Air Force) or commander (Navy and Coast Guard) (O-5), and above. Beginning in the 1980s, the Legion of Merit began to be awarded more frequently to senior-ranking warrant officers (W-4 and W-5), as well as to senior enlisted service members (E-8 and E-9), usually as a retirement award. When not awarded as a retirement award, it is most commonly awarded to officers in pay grade O-6 and higher. The Meritorious Service Medal (MSM) was established in 1969 as a "junior" version of the Legion of Merit and prior to 2003 was only awarded for non-combat service. The MSM is awarded more frequently, and to more lower-ranking military personnel, than the Legion of Merit. Recipients of the MSM are usually in pay grades E-7 through E-9, W-3 through W-5 (Army only), and O-4 through O-6 for the Army, Air Force, and Space Force; for the Navy, Marine Corps, and Coast Guard the MSM is usually presented to those in pay grades E-9, W-4, W-5, O-5 and O-6. Insignia The Chief Commander Degree of the Legion of Merit Medal is, on a wreath of green laurel joined at the bottom by a gold bow-knot (rosette), a domed five-pointed white star bordered crimson, points reversed with v-shaped extremities tipped with a gold ball. In the center, a blue disk encircled by gold clouds, with 13 white stars arranged in the pattern that appears on the Great Seal of the United States. Between each point, within the wreath are crossed arrows pointing outwards. The overall width is .
The words "UNITED STATES OF AMERICA" are engraved in the center of the reverse. A miniature of the decoration in gold on a horizontal gold bar is worn on the service ribbon. The Commander Degree of the Legion of Merit Medal is, on a wreath of green laurel joined at the bottom by a gold bow-knot (rosette), a five-pointed white star bordered crimson, points reversed with v-shaped extremities tipped with a gold ball. In the center, a blue disk encircled by gold clouds, with 13 white stars arranged in the pattern that appears on the Great Seal of the United States. Between each star point, within the wreath, are crossed war arrows pointing outwards, representing armed protection to the Nation. The overall width is . A gold laurel wreath in the v-shaped angle at the top connects an oval suspension ring to the neck ribbon that is in width. The reverse of the five-pointed star is enameled in white, and the border is crimson. In the center, a disk for engraving the name of the recipient surrounded by the words "ANNUIT COEPTIS MDCCLXXXII": a combination of the motto from the Great Seal, "He [God] Has Favored Our Undertakings", with the date for the first award of a US decoration, the Purple Heart. An outer scroll contains the words "UNITED STATES OF AMERICA." A miniature of the decoration in silver on a horizontal silver bar is worn on the service ribbon. The neck ribbon for the degree of Commander is wide and consists of the following stripes: white; center crimson; and white. The Officer Degree of the Legion of Merit Medal is similar to the degree of Commander except the overall width is and the pendant has a suspension ring instead of the wreath for attaching the ribbon. A gold replica of the medal, wide, is centered on the suspension ribbon. The Legionnaire Degree of the Legion of Merit Medal and the Legion of Merit Medal issued to U.S. personnel is the same as the degree of Officer, except the suspension ribbon does not have the medal replica.
The ribbon for all of the decorations is wide and consists of the following stripes: white; center crimson; and white. The reverse of all of the medals has the motto taken from the Great Seal of the United States, "ANNUIT COEPTIS" ("He [God] has favored our undertakings") and the date "MDCCLXXXII" (1782), which is the date of America's first decoration, the Badge of Military Merit, now known as the Purple Heart. The ribbon design also follows the pattern of the Purple Heart ribbon. Additional awards Additional awards of the Legion of Merit are denoted by oak leaf clusters (in the Army, Air Force, and Space Force), and by gold stars (in the Navy, Marine Corps, and Coast Guard). Until 2017, the sea services (the Navy, Marine Corps, and Coast Guard) awarded the Combat "V" for wear on the LOM. The Army, Air Force, and Space Force do not authorize the "V" device for the Legion of Merit. Notable recipients A few recipients are listed | with senior leadership/command positions or other senior positions of significant responsibility. The performance must have been such as to merit recognition of key individuals for service rendered in a clearly exceptional manner. Performance of duties normal to the grade, branch, specialty, or assignment, and experience of an individual is not an adequate basis for this award. For service not related to actual war, the term "key individual" applies to a narrower range of positions than in time of war and requires evidence of significant achievement. In peacetime, service should be in the nature of a special requirement or of an extremely difficult duty performed in an unprecedented and clearly exceptional manner. However, justification of the award may accrue by virtue of exceptionally meritorious service in a succession of important positions. The degrees and the design of the decoration were influenced by the French Legion of Honour (Légion d'honneur). 
History World War II Although recommendations for creation of a medal for meritorious service were initiated as early as September 1937, no formal action was taken toward approval. In a letter to the Quartermaster General (QMG) dated December 24, 1941, the Adjutant General formally requested action be initiated to create a meritorious service medal, and provide designs in the event the decoration was established. Proposed designs prepared by Bailey, Banks, and Biddle, and the Office of the Quartermaster General were provided to the Assistant Chief of Staff for Personnel (Colonel Heard) by the QMG on January 5, 1942. The Assistant Chief of Staff (G-1), Brigadier General John H. Hilldring, in a response to the QMG on April 3, 1942, indicated the Secretary of War had approved the design recommended by the QMG. The design of the Legion of Merit (change of name) would be ready for issue immediately after legislation authorizing it was enacted into law. (A separate medal called the Meritorious Service Medal was established in 1969.) An act of Congress (Public Law 671, 77th Congress, Chapter 508, 2d Session) on July 20, 1942, established the Legion of Merit and provided that the medal "shall have suitable appurtenances and devices and not more than four degrees, and which the President, under such rules and regulations as he shall prescribe, may award to (a) personnel of the Armed Forces of the United States and of the Government of the Commonwealth of the Philippines and (b) personnel of the armed forces of friendly foreign nations who, since the proclamation of an emergency by the President on September 8, 1939, shall have distinguished themselves by exceptionally meritorious conduct in the performance of outstanding services." The medal was announced in War Department Bulletin No. 40, dated August 5, 1942. Executive Order 9260, dated October 29, 1942, by President Franklin D.
Roosevelt, established the rules for the Legion of Merit, and required the President's approval for the award. Following the invasion of North Africa in November 1942, a number of United States officers were awarded the Legion of Merit in the degree of Officer. One of the recipients was future Chairman of the Joint Chiefs of Staff Lyman Lemnitzer. Soon after, regulations for the award of the Legion of Merit were revised so that it would not be awarded in the degrees above Legionnaire to United States military personnel. The Legion of Merit is similar to the French Legion of Honor in both its design, a five-armed cross, and in that it is awarded in multiple degrees. Unlike the Legion of Honor, however, the Legion of Merit is only awarded to military personnel. In addition, it is the only award in the world with multiple degrees of which the higher degrees cannot be awarded to citizens of the country of the award's origin. In October 1942, Brazilian Army Brigadier General Amaro Soares Bittencourt became the first person awarded the Legion of Merit (Commander) and a week later, Lieutenant, junior grade Ann A. Bernatitus, a U.S. Navy Nurse Corps officer, became the first member of the United States Armed Forces and the first woman to receive the Legion of Merit. She received the award for her service during the defense of the Philippines. LTJG Bernatitus was also the first recipient of the Legion of Merit authorized to wear a Combat "V" with the medal. General Dwight D. Eisenhower was presented the Legion of Merit by President Roosevelt while he was en route to the Tehran Conference, in Cairo, Egypt, on November 26, 1943. In 1943, at the request of the Army Chief of Staff, General George C. Marshall, approval authority for U.S. personnel was delegated to the Department of War. Executive Order 10600, dated March 15, 1955, by President Dwight D. Eisenhower, again revised approval authority. Current provisions are contained in Title 10, United States Code 1121. 
As a result, awarding authority for the Legion of Merit resides with general officers/flag officers at the Lieutenant General / Vice Admiral level or higher. The U.S. Navy, Marine Corps, and Coast Guard, unlike the Army and later the Air Force, provided for the Legion of Merit to be awarded with a "V" device indicating awards for participation in combat operations. From 1942 to 1944, the Legion of Merit was awarded for a fairly wide range of achievements. This was because it was, |
detect an even wider variety of suspicious constructs. These include "warnings about syntax errors, uses of undeclared variables, calls to deprecated functions, spacing and formatting conventions, misuse of scope, implicit fallthrough in switch statements, missing license headers, [and]...dangerous language features". Lint-like tools are especially useful for dynamically typed languages like JavaScript and Python. Because the compilers of such languages typically do not enforce as many and as strict rules prior to execution, linter tools can also be used as simple debuggers for finding common errors (e.g. syntactic discrepancies) as well as hard-to-find errors such as heisenbugs (drawing attention to suspicious code as "possible errors"). Lint-like tools generally perform static analysis of source code. Lint-like tools have also been developed for other aspects of language, including grammar and style guides. Specialization Fortran Fortran compilers using space-squeezing techniques (e.g. IBM | functions, lint-like tools have also advanced their capabilities. For example, Gimpel's PC-Lint, introduced in 1985 and used to analyze C++ source code, is still for sale. Overview The analysis performed by lint-like tools can also be performed by an optimizing compiler, which aims to generate faster code. In his original 1978 paper, Johnson addressed this issue, concluding that "the general notion of having two programs is a good one" because they concentrated on different things, thereby allowing the programmer to "concentrate at one stage of the programming process solely on the algorithms, data structures, and correctness of the program, and then later retrofit, with the aid of lint, the desirable properties of universality and portability". Even though modern compilers have evolved to include many of lint's historical functions, lint-like tools have also evolved to detect an even wider variety of suspicious constructs. 
These include "warnings about syntax errors, uses of undeclared variables, calls to deprecated functions, spacing and formatting conventions, misuse of scope, implicit fallthrough in switch statements, missing license headers, [and]...dangerous language features". Lint-like tools are especially useful for dynamically typed languages like JavaScript and Python. Because the compilers of such languages typically do not enforce as many and as strict rules prior to execution, linter tools can also be used as simple debuggers for finding common errors (e.g. syntactic discrepancies) as well as hard-to-find errors such as heisenbugs (drawing attention to |
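The checks described above, such as flagging a name that is read but never assigned before the program ever runs, can be sketched as a toy static-analysis pass over Python's `ast` module. This is a minimal illustration of the idea, not the implementation of any real linter; the function name `find_undeclared_names` and the sample snippet are hypothetical:

```python
import ast
import builtins

def find_undeclared_names(source):
    """Toy lint pass: report names that are read but never assigned,
    defined, or imported.  Deliberately minimal -- it ignores function
    parameters, comprehension scopes, globals, and much else that a
    real linter handles."""
    declared, used = set(), []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                declared.add(node.id)          # assignment target
            elif isinstance(node.ctx, ast.Load):
                used.append(node.id)           # name being read
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                               ast.ClassDef)):
            declared.add(node.name)            # def / class introduces a name
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            for alias in node.names:
                declared.add(alias.asname or alias.name.split(".")[0])
    known = declared | set(dir(builtins))
    # preserve first-seen order, drop duplicates
    return list(dict.fromkeys(n for n in used if n not in known))

sample = "total = 1\nprint(totl + 1)\n"       # 'totl' is a typo
print(find_undeclared_names(sample))          # ['totl']
```

Note that the source is never executed: the pass inspects the syntax tree alone, which is why lint-like tools can surface such mistakes in dynamically typed languages before runtime, where the interpreter would otherwise only raise a `NameError` when the faulty line happens to run.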
articles on battles. They may be organized alphabetically, by era, by conflict, by participants or location, or by death toll. See :Category:Battles for a complete list of articles on battles. Alphabetical list List of battles (alphabetical) Chronological By era List of battles before 301 List of battles 301–1300 List of battles 1301–1600 List of battles 1601–1800 List of battles 1801–1900 List of battles 1901–2000 List of battles since 2001 By conflict Viking Invasions (793–1066) Crusades (1096–1291) Mongol Conquests (1205–1312) Hundred Years' War (1337–1453) Italian Wars (1494-1559) Seven Years' War (1756–1763) American Revolutionary War (1775–1782) Napoleonic Wars (1799–1815) American Civil War (1861–1865) World War I (1914–1918) World War II (1939–1945) 2022 Russian invasion of Ukraine (2022) By participants or location Naval battles Naval battles Philippines By death toll List of battles by casualties List of battles and other violent events by death toll See also Lists of wars around the World (by date, region, type of conflict) Lists of wars and conflict by region Lists of battles (Orders) | since 2001 By conflict Viking Invasions (793–1066) Crusades (1096–1291) Mongol Conquests (1205–1312) Hundred Years' War (1337–1453) Italian Wars (1494-1559) Seven Years' War (1756–1763) American Revolutionary War (1775–1782) Napoleonic Wars (1799–1815) American Civil War (1861–1865) World War I (1914–1918) World War II (1939–1945) 2022 Russian invasion of Ukraine (2022) By participants or location Naval battles Naval battles Philippines By death toll List of battles by casualties List of battles and other violent events by death toll See also Lists of wars around the World (by date, region, type of conflict) Lists of wars and conflict by region Lists of battles (Orders) List of sieges List of active rebel groups List of rebel groups that control territory List of events named massacres List of number of conflicts per year The Fifteen Decisive Battles of the World List of 
most lethal battles in world history Africa : List of conflicts in Africa (Military history of Africa) List of modern conflicts in North Africa (Maghreb) Conflicts in the Horn of Africa (East region) Americas : List of conflicts in North America List of |
from the maternal blood supply. Because the newborn fish are large compared to the fry of oviparous fish, which are those that lay eggs, newborn fish of livebearers are easier to feed than the fry of egg-laying species, such as characins and cichlids. This makes them much easier to raise, and for this reason, aquarists often recommend them for beginning fish breeder hobbyists. The larger size of livebearer fry makes them far less vulnerable to predation, as the parents often eat fry if hungry. With sufficient cover in the way of plants or porous objects, they can sometimes mature in a community tank. Ovoviviparous and viviparous fish compared Most of the Poeciliidae are ovoviviparous, that is, while the eggs are retained inside the body of the female for protection, the eggs are essentially independent of the mother and she does not provide them with any nutrients. In contrast, fish such as splitfins and halfbeaks are viviparous, with the eggs receiving food from the maternal blood supply through structures analogous to the placenta of placental mammals. Aberrant livebearers and mouthbrooders Seahorses and pipefish can be defined as livebearers, although in these cases the males incubate the eggs rather than the females. In many cases, the eggs are dependent on the male for oxygen and | species, such as characins and cichlids. This makes them much easier to raise, and for this reason, aquarists often recommend them for beginning fish breeder hobbyists. The larger size of livebearer fry makes them far less vulnerable to predation, as the parents often eat fry if hungry. With sufficient cover in the way of plants or porous objects, they can sometimes mature in a community tank. Ovoviviparous and viviparous fish compared Most of the Poeciliidae are ovoviviparous, that is, while the eggs are retained inside the body of the female for protection, the eggs are essentially independent of the mother and she does not provide them with any nutrients. 
In contrast, fish such as splitfins and halfbeaks are viviparous, with the eggs receiving food from the maternal blood supply through structures analogous to the placenta of placental mammals. Aberrant livebearers and mouthbrooders Seahorses and pipefish can be defined as livebearers, although in these cases the males incubate the eggs rather than the females. In many cases, the eggs are dependent on the male for oxygen and nutrition, so these fish can be further defined as viviparous livebearers. Many cichlids are mouthbrooders, with the female (or more rarely the male) incubating the eggs in the mouth. Compared with other cichlids, these species produce fewer but bigger eggs, and when they emerge, the fry are |
Livia, on the Palatine. Archaeologists came across the 15-meter-deep cavity while working to restore the decaying palace. On 20 November 2007, the first set of photos were released showing the vault of the grotto which is encrusted with colourful mosaics, pumice stones and seashells. The center of the ceiling features a depiction of a white eagle, the symbol of the Roman Empire. Archaeologists are still searching for the grotto's entrance. Its location below Augustus' residence was thought to be significant; Octavian, before he became Augustus, had considered taking the name Romulus to indicate that he intended to found Rome anew. Opposing opinions Adriano La Regina (formerly Rome's archaeological superintendent 1976–2004, professor of Etruscology at Sapienza University of Rome), Professor Fausto Zevi (professor of Roman Archaeology at Rome's La Sapienza University) and Professor Henner von Hesberg (head of the German Archaeological Institute, Rome) denied | suckled them until they were rescued by the shepherd Faustulus. Luperci, the priests of Faunus, celebrated certain ceremonies of the Lupercalia at the cave, from the earliest days of the City until at least 494 AD. Modern discovery In January 2007, Italian archaeologist Irene Iacopi announced that she had probably found the legendary cave beneath the remains of Emperor Augustus's house, the Domus Livia, on the Palatine. Archaeologists came across the 15-meter-deep cavity while working to restore the decaying palace. On 20 November 2007, the first set of photos were released showing the vault of the grotto which is encrusted with colourful mosaics, pumice stones and seashells. The center of the ceiling features a depiction of a white eagle, the symbol of the Roman Empire. Archaeologists are still searching for the grotto's entrance. 
Its location below Augustus' residence was thought to be significant; Octavian, before he became Augustus, had considered taking the name Romulus to indicate that he intended to found Rome anew. |
unsatisfied with it, and directed a second video which paid tribute to tourmates like Primus, Deftones and Korn, who appeared in the video. Borland stated in an interview that George Michael, the writer of the song, hated the cover and "hates us for doing it". Significant Other (1999–2000) Following the radio success of "Faith", the band was determined to record the follow-up to its first album in order to show that they were not a Korn soundalike or a cover band; the band began writing an album which dealt with issues deriving from its newfound fame. Terry Date, who had produced albums for Pantera, White Zombie and Deftones, was chosen to produce the album. The band allowed Durst and Lethal to explore their hip hop origins by recording a song with Method Man. The song was originally titled "Shut the Fuck Up", but was retitled "N 2 Gether Now" for marketing purposes. Durst also recorded with Eminem, but the collaboration, "Turn Me Loose", was left off the album. The album also featured guest appearances by Stone Temple Pilots singer Scott Weiland, Korn's Jonathan Davis and Staind singer Aaron Lewis, and interludes by Les Claypool and Matt Pinfield. Significant Other saw Limp Bizkit reaching a new level of commercial success; the band was featured on the covers of popular music magazines, including Spin, and now found themselves repeatedly mobbed for autographs; the band was allowed to interact directly with its fans on a website established by Dike 99. Durst also moved from Jacksonville to Los Angeles. Significant Other was seen as an improvement over its debut, and was generally well received by critics, with mixed-to-positive reviews. However, the band also continued to be criticized by the media; an article profiling the band in Spin and discussing Significant Other claimed that "Limp Bizkit had yet to write a good song", and musicians Marilyn Manson and Trent Reznor criticized the band. 
The band promoted the album by playing unannounced concerts in Detroit and Chicago, as radio stations received a strong number of requests for the album's first single, "Nookie". Significant Other climbed to No. 1 on the Billboard 200, selling 643,874 copies in its first week of release. In its second week of release the album sold an additional 335,000 copies. On the opening night of the band's Limptropolis tour with Kid Rock, Sam Rivers smashed his bass in frustration over the venue's poor sound, cutting his hand. After his hand was stitched up at a hospital, Rivers returned to finish the set. On July 12, Durst allegedly kicked a security guard in the head during a performance in St. Paul, Minnesota, and was later arrested on assault charges. Further criticisms of the band appeared in Rolling Stone and The New York Times. New York Times writer Ann Powers wrote, "DJ Lethal used his turntables as a metal guitar, riffing expansively and going for effects instead of rhythm. John Otto on drums and Sam Rivers on bass never even tried to get funky, instead steering hip-hop's break-beat-based structure into a backbone for power chords. This makes for a hybrid that would be more interesting if the band did not constantly mire itself in boring tempos, and if Mr. Durst had any talent as a singer". In the summer of 1999, Limp Bizkit played at the highly anticipated Woodstock '99 show in front of approximately 200,000 people. Violent action sprang up during and after the band's performance, including fans tearing plywood from the walls during a performance of the song "Break Stuff". Several sexual assaults were reported in the aftermath of the concert. Durst stated during the concert, "People are getting hurt. Don't let anybody get hurt. But I don't think you should mellow out. That's what Alanis Morissette had you motherfuckers do. If someone falls, pick 'em up. We already let the negative energy out. Now we wanna let out the positive energy". 
Durst later stated in an interview, "I didn't see anybody getting hurt. You don't see that. When you're looking out on a sea of people and the stage is twenty feet in the air and you're performing, and you're feeling your music, how do they expect us to see something bad going on?" Les Claypool told The San Francisco Examiner, "Woodstock was just Durst being Durst. His attitude is 'no press is bad press', so he brings it on himself. He wallows in it. Still, he's a great guy." Durst saw the band as being scapegoated for the event's controversy, and reflected on the criticisms surrounding the band in his music video for the single "Re-Arranged", which depicted the band members receiving death sentences for their participation in the concerts. The video ended with angry witnesses watching as the band drowned in milk while performing the song. Durst later stated that the promoters of Woodstock '99 were at fault for booking his band, due to their reputation for raucous performances. Despite this controversy, Significant Other remained at No. 1 on the Billboard charts, and the band headlined the year's Family Values Tour. Durst directed a music video for "N 2 Gether Now" which featured Method Man and Pauly Shore, and was inspired by Inspector Clouseau's fights with his butler, Cato Fong, in the Pink Panther film series. Chocolate Starfish and the Hot Dog Flavored Water (2000–2001) In 2000, Durst announced that the band's third studio album would be titled Chocolate Starfish and the Hot Dog Flavored Water. The press thought he was joking about this title. The album title is intended to sound like a fictional band; the phrase "Chocolate Starfish" refers to the human anus, and Durst himself, who has frequently been called an "asshole". Borland contributed the other half of the album's title when the band was standing around at a truck stop, looking at bottles of flavored water, and Borland joked that the truck stop didn't have hot dog or meat-flavored water. 
In June 2000, Limp Bizkit performed at the WXRK Dysfunctional Family Picnic, but showed up an hour late for their set. An Interscope spokesman stated that there was confusion over the band's set time. During the band's performance, Durst criticized Creed singer Scott Stapp, calling him "an egomaniac". Creed's representatives later presented Durst with an autographed anger management manual. In the summer, Limp Bizkit's tour was sponsored by the controversial file sharing service Napster, doing free shows with a metal cage as the only thing separating them from the audience. Durst was an outspoken advocate of file sharing. They also did a "Guerrilla Tour" which involved the band setting up illegal and impromptu public gigs on rooftops and alleyways, some being shut down by the police. During the 2000 MTV Video Music Awards, Durst performed "Livin' It Up", a song from the upcoming album, as a duet with Christina Aguilera. In response to the performance, Filter frontman Richard Patrick was quoted as saying "Fred getting onstage with Christina Aguilera embarrassed us all." In response to the negative reactions to the performance, Durst remarked, "I already told you guys before, I did it all for the nookie, man." In response to Durst's remark, Aguilera commented, "He got no nookie." Released on October 17, Chocolate Starfish and the Hot Dog Flavored Water set a record for highest first-week sales for a rock album with over one million copies sold in the US in its first week of release. 400,000 of those sales happened during the first day, making it the fastest-selling rock album ever, breaking the record held for 7 years by Pearl Jam's Vs. Chocolate Starfish and the Hot Dog Flavored Water was certified Gold, Platinum and six times Multi-Platinum. 
The album received mixed reviews, with AllMusic's Stephen Thomas Erlewine writing, "Durst's self-pitying and the monotonous music give away that the band bashed Chocolate Starfish out very quickly – it's the sound of a band determined to deliver a sequel in a finite amount of time." Entertainment Weekly writer David Browne named it as the worst album title of 2000. During a 2001 tour of Australia at the Big Day Out festival in Sydney, fans rushed the stage in the mosh pit, and teenager Jessica Michalik died of asphyxiation. In court, Durst, represented by long-time attorney Ed McPherson, testified that he had warned the concert's organizers Aaron Jackson, Will Pearce and Amar Tailor, and also the promoter Vivian Lees, of the potential dangers of such minimal security. After viewing videotapes and hearing witness testimony, however, the coroner said it was evident that the density of the crowd was dangerous at the time Limp Bizkit took the stage, stating that Fred Durst should have acted more responsibly when the problem became apparent. Durst stated that he was "emotionally scarred" because of the teenager's death. Later in 2001, numerous hip-hop artists, including P. Diddy, Timbaland, Bubba Sparxxx and Everlast, remixed famous songs from the band into hip-hop versions, adding their own styles and modifications. The album was called New Old Songs. Departure of Borland and Results May Vary (2001–2003) In October 2001, Durst released a statement on their website stating that "Limp Bizkit and Wes Borland have amicably decided to part ways. Both Limp Bizkit and Borland will continue to pursue their respective musical careers. Both wish each other the best of luck in all future endeavors." Durst also stated that the band would "comb the world for the illest guitar player known to man" to replace Borland. When asked why Borland quit the band, Ross Robinson stated that he quit because "He doesn't sell out for money anymore".
After holding a nationwide audition for a new guitarist, called "Put Your Guitar Where Your Mouth Is", the band recorded with Snot guitarist Mike Smith, but later scrapped their recording sessions with Smith. Durst told a fan site that he had a falling-out with Smith, saying "We are the type of people that stay true to our family and our instincts and at any moment will act on intuition as a whole. Mike wasn't the guy. We had fun playing with him but always knew, in the back of our minds, that he wasn't where we needed him to be mentally." In May 2002, Durst posted Wes Borland's personal e-mail address online and told fans to ask him to rejoin the group. Borland stated that 75% or more of all the e-mails pleaded for him not to return to the band. After recording another album without Smith, the band scrapped the new sessions and assembled a new album combining songs from different sessions. During the album's production, the working title changed from Bipolar to Panty Sniffer, and was completed under the title Results May Vary. Under Durst's sole leadership, the album encompassed a variety of styles, and featured a cover of The Who's "Behind Blue Eyes", which differed from the original's arrangement in its inclusion of a Speak & Spell during the song's bridge. In the summer of 2003, Limp Bizkit participated on the Summer Sanitarium Tour, headlined by Metallica. At the tour's stop in Chicago, attendees of the concert threw items and heckled Durst from the moment he walked on stage. With the crowd chanting "Fuck Fred Durst" and continuing their assault on him, Durst threw the mic down after six songs and walked off stage, but not before heckling the crowd back. He repeatedly said, "Limp Bizkit are the best band in the world!" until a roadie took his microphone away. An article in the Sun-Times stated that the hostility was started by radio personality Mancow. Results May Vary was released on September 23, 2003, and received largely unfavorable reviews. 
AllMusic reviewer Stephen Thomas Erlewine panned the album, writing, "the music has no melody, hooks, or energy, [and] all attention is focused on the clown jumping up and down and screaming in front, and long before the record is over, you're left wondering, how the hell did he ever get to put this mess out?" The Guardian reviewer Caroline Sullivan wrote, "At least Limp Bizkit can't be accused of festering in the rap-rock ghetto ... But Durst's problems are ever-present – and does anybody still care?" Despite criticisms of the album, it was a commercial success, peaking at No. 3 on the Billboard 200. Borland's return, The Unquestionable Truth (Part 1) and hiatus (2004–2008) In August 2004, Borland rejoined Limp Bizkit, which began recording an EP, The Unquestionable Truth (Part 1). In May, The Unquestionable Truth (Part 1) was released. Sammy Siegler took over drumming duties for the band for much of the EP, which featured a more experimental sound, described by AllMusic writer Stephen Thomas Erlewine as "neo-prog alt-metal". At Durst's insistence, the album was released as an underground album, without any advertising or promotion. Borland disagreed with the decision, suggesting that it was "self-sabotage": "Maybe he was already unhappy with the music, and he didn't really want to put it out there." The EP received mixed reviews. Stephen Thomas Erlewine praised the music, calling it "a step in the right direction – it's more ambitious, dramatic, and aggressive, built on pummeling verses and stop-start choruses." However, he felt that the band was being "held back" by Durst, whom he called "the most singularly unpleasant, absurd frontman in rock." IGN writer Spence D. similarly gave it a mixed review, as he felt that the album lacked direction, but that showed potential for the band's musical growth. The Unquestionable Truth (Part 1) sold over 37,000 copies worldwide, peaking at No. 24 on the Billboard 200. 
Following the release of the band's Greatest Hitz album, the band went on hiatus. Borland stated that it was unlikely that a sequel to The Unquestionable Truth would be produced and that "As of right now, none of my future plans include Limp Bizkit." Reunion, Gold Cobra and departure from Interscope (2009–2011) In 2009, Limp Bizkit reunited with Borland playing guitar and launched the Unicorns N' Rainbows Tour. Durst announced that they had begun to record a new album, which Borland titled Gold Cobra. Borland said that the title does not have any meaning, and that it was chosen because it fit the style of music the band was writing for the album. The band recorded a spoken intro written by Durst and performed by Kiss member Gene Simmons for the album, but it was left off the completed album. The band also recorded additional "non-album" tracks, including "Combat Jazz", which featured rapper Raekwon and "Middle Finger", featuring Paul Wall. "Shotgun" was released as a single on May 17, 2011. The song is noted for featuring a guitar solo by Borland, something that the band is not known for. "Shotgun" received favorable reviews, with Artistdirect writing, "['Shotgun'] feels like Bizkit approached the signature style on Three Dollar Bill Y'All and Significant Other with another decade-plus of instrumental experience and virtuosity, carving out a banger that could get asses moving in the club or fists flying in the mosh pit." Gold Cobra was released on June 28 and received mixed to positive reviews. AllMusic's Stephen Thomas Erlewine called it "a return to the full-throttled attack of Three Dollar Bill Y'All. IGN writer Chad Grischow wrote, "Though far from their best work, Limp Bizkit's latest at least proves that their 2005 Greatest Hitz album may have been premature." Metal Hammer writer Terry Bezer appraised the album, writing "Aside from the odd duff moment, Gold Cobra throws out the hot shit that'll make you bounce in the mosh pit over and over again." 
Significant Other featured a collaboration with Method Man. The song was originally titled "Shut the Fuck Up", but was retitled "N 2 Gether Now" for marketing purposes. Durst also recorded with Eminem, but the collaboration, "Turn Me Loose", was left off the album. The album also featured guest appearances by Stone Temple Pilots singer Scott Weiland, Korn's Jonathan Davis and Staind singer Aaron Lewis, and interludes by Les Claypool and Matt Pinfield. Significant Other saw Limp Bizkit reach a new level of commercial success; the band was featured on the covers of popular music magazines, including Spin, and found itself repeatedly mobbed for autographs. The band was able to interact directly with its fans on a website established by Dike 99, and Durst moved from Jacksonville to Los Angeles. Significant Other was seen as an improvement over the band's debut and received generally mixed-to-positive reviews.
However, the band also continued to be criticized by the media; an article profiling the band in Spin and discussing Significant Other claimed that "Limp Bizkit had yet to write a good song", and musicians Marilyn Manson and Trent Reznor criticized the band. The band promoted the album by playing unannounced concerts in Detroit and Chicago, as radio stations received a strong number of requests for the album's first single, "Nookie". Significant Other climbed to No. 1 on the Billboard 200, selling 643,874 copies in its first week of release; it sold an additional 335,000 copies in its second week. On the opening night of the band's Limptropolis tour with Kid Rock, Sam Rivers smashed his bass in frustration over the venue's poor sound, cutting his hand. After his hand was stitched up at a hospital, Rivers returned to finish the set. On July 12, Durst allegedly kicked a security guard in the head during a performance in St. Paul, Minnesota, and was later arrested on assault charges. Further criticisms of the band appeared in Rolling Stone and The New York Times. New York Times writer Ann Powers wrote, "DJ Lethal used his turntables as a metal guitar, riffing expansively and going for effects instead of rhythm. John Otto on drums and Sam Rivers on bass never even tried to get funky, instead steering hip-hop's break-beat-based structure into a backbone for power chords. This makes for a hybrid that would be more interesting if the band did not constantly mire itself in boring tempos, and if Mr. Durst had any talent as a singer". In the summer of 1999, Limp Bizkit played at the highly anticipated Woodstock '99 show in front of approximately 200,000 people. Violence broke out during and after the band's performance, including fans tearing plywood from the walls during a performance of the song "Break Stuff". Several sexual assaults were reported in the aftermath of the concert. Durst stated during the concert, "People are getting hurt.
Don't let anybody get hurt. But I don't think you should mellow out. That's what Alanis Morissette had you motherfuckers do. If someone falls, pick 'em up. We already let the negative energy out. Now we wanna let out the positive energy". Durst later stated in an interview, "I didn't see anybody getting hurt. You don't see that. When you're looking out on a sea of people and the stage is twenty feet in the air and you're performing, and you're feeling your music, how do they expect us to see something bad going on?" Les Claypool told The San Francisco Examiner, "Woodstock was just Durst being Durst. His attitude is 'no press is bad press', so he brings it on himself. He wallows in it. Still, he's a great guy." Durst saw the band as being scapegoated for the event's controversy, and reflected on the criticisms surrounding the band in his music video for the single "Re-Arranged", which depicted the band members receiving death sentences for their participation in the concert. The video ended with angry witnesses watching as the band drowned in milk while performing the song. Durst later stated that the promoters of Woodstock '99 were at fault for booking his band, given its reputation for raucous performances. Despite this controversy, Significant Other remained at No. 1 on the Billboard charts, and the band headlined the year's Family Values Tour. Durst directed a music video for "N 2 Gether Now" which featured Method Man and Pauly Shore, and was inspired by Inspector Clouseau's fights with his butler, Cato Fong, in the Pink Panther film series. Chocolate Starfish and the Hot Dog Flavored Water (2000–2001) In 2000, Durst announced that the band's third studio album would be titled Chocolate Starfish and the Hot Dog Flavored Water. The press thought he was joking about the title. The album title is intended to sound like that of a fictional band; the phrase "Chocolate Starfish" refers to the human anus and to Durst himself, who has frequently been called an "asshole".
Borland contributed the other half of the album's title when the band was standing around at a truck stop, looking at bottles of flavored water; Borland joked that the truck stop didn't have hot dog or meat-flavored water. In June 2000, Limp Bizkit performed at the WXRK Dysfunctional Family Picnic, but showed up an hour late for their set. An Interscope spokesman stated that there was confusion over the band's set time. During the band's performance, Durst criticized Creed singer Scott Stapp, calling him "an egomaniac". Creed's representatives later presented Durst with an autographed anger management manual. In the summer, Limp Bizkit toured with sponsorship from the controversial file-sharing service Napster, playing free shows with a metal cage as the only thing separating the band from the audience. Durst was an outspoken advocate of file sharing. The band also undertook a "Guerrilla Tour", setting up illegal, impromptu public gigs on rooftops and in alleyways, some of which were shut down by the police. During the 2000 MTV Video Music Awards, Durst performed "Livin' It Up", a song from the upcoming album, as a duet with Christina Aguilera. In response to the performance, Filter frontman Richard Patrick was quoted as saying, "Fred getting onstage with Christina Aguilera embarrassed us all." In response to the negative reactions to the performance, Durst remarked, "I already told you guys before, I did it all for the nookie, man." In response to Durst's remark, Aguilera commented, "He got no nookie." Released on October 17, Chocolate Starfish and the Hot Dog Flavored Water set a record for the highest first-week sales for a rock album, with over one million copies sold in the US in its first week of release; 400,000 of those sales came on the first day, making it the fastest-selling rock album ever and breaking the record held for seven years by Pearl Jam's Vs. Chocolate Starfish and the Hot Dog Flavored Water was certified Gold, Platinum and six times Multi-Platinum.
The album received mixed reviews, with AllMusic's Stephen Thomas Erlewine writing, "Durst's self-pitying and the monotonous music give away that the band bashed Chocolate Starfish out very quickly – it's the sound of a band determined to deliver a sequel in a finite amount of time." Entertainment Weekly writer David Browne named it the worst album title of 2000. During a 2001 tour of Australia at the Big Day Out festival in Sydney, fans rushed the stage in the mosh pit, and teenager Jessica Michalik died of asphyxiation. In court, Durst, represented by long-time attorney Ed McPherson, testified that he had warned the concert's organizers, Aaron Jackson, Will Pearce and Amar Tailor, and the promoter, Vivian Lees, of the potential dangers of such minimal security. After viewing videotapes and hearing witness testimony, however, the coroner said it was evident that the density of the crowd was dangerous at the time Limp Bizkit took the stage, stating that Fred Durst should have acted more responsibly when the problem became apparent. Durst stated that he was "emotionally scarred" by the teenager's death. Later in 2001, numerous hip-hop artists, including P. Diddy, Timbaland, Bubba Sparxxx and Everlast, remixed famous songs by the band, adding their own styles and modifications; the resulting album was called New Old Songs. Departure of Borland and Results May Vary (2001–2003) In October 2001, Durst released a statement on the band's website stating that "Limp Bizkit and Wes Borland have amicably decided to part ways. Both Limp Bizkit and Borland will continue to pursue their respective musical careers. Both wish each other the best of luck in all future endeavors." Durst also stated that the band would "comb the world for the illest guitar player known to man" to replace Borland. When asked why Borland quit the band, Ross Robinson stated, "He doesn't sell out for money anymore".
After holding a nationwide audition for a new guitarist, called "Put Your Guitar Where Your Mouth Is", the band recorded with Snot guitarist Mike Smith, but later scrapped their recording sessions with him. Durst told a fan site that he had had a falling-out with Smith, saying, "We are the type of people that stay true to our family and our instincts and at any moment will act on intuition as a whole. Mike wasn't the guy. We had fun playing with him but always knew, in the back of our minds, that he wasn't where we needed him to be mentally." In May 2002, Durst posted Wes Borland's personal e-mail address online and told fans to ask him to rejoin the group. Borland stated that 75% or more of all the e-mails pleaded for him not to return to the band. After recording another album without Smith, the band scrapped the new sessions and assembled a new album combining songs from different sessions. During the album's production, the working title changed from Bipolar to Panty Sniffer, and it was completed under the title Results May Vary. Under Durst's sole leadership, the album encompassed a variety of styles and featured a cover of The Who's "Behind Blue Eyes", which differed from the original's arrangement in its inclusion of a Speak & Spell during the song's bridge. In the summer of 2003, Limp Bizkit participated in the Summer Sanitarium Tour, headlined by Metallica. At the tour's stop in Chicago, attendees threw items and heckled Durst from the moment he walked on stage. With the crowd chanting "Fuck Fred Durst" and continuing their assault on him, Durst threw the mic down after six songs and walked off stage, but not before heckling the crowd back. He repeatedly said, "Limp Bizkit are the best band in the world!" until a roadie took his microphone away. An article in the Sun-Times stated that the hostility was started by radio personality Mancow. Results May Vary was released on September 23, 2003, and received largely unfavorable reviews.
AllMusic reviewer Stephen Thomas Erlewine panned the album, writing, "the music has no melody, hooks, or energy, [and] all attention is focused on the clown jumping up and down and screaming in front, and long before the record is over, you're left wondering, how the hell did he ever get to put this mess out?" The Guardian reviewer Caroline Sullivan wrote, "At least Limp Bizkit can't be accused of festering in the rap-rock ghetto ... But Durst's problems are ever-present – and does anybody still care?" Despite criticisms of the album, it was a commercial success, peaking at No. 3 on the Billboard 200. Borland's return, The Unquestionable Truth (Part 1) and hiatus (2004–2008) In August 2004, Borland rejoined Limp Bizkit, which began recording an EP, The Unquestionable Truth (Part 1). The EP was released in May 2005. Sammy Siegler took over drumming duties for much of the EP, which featured a more experimental sound, described by AllMusic writer Stephen Thomas Erlewine as "neo-prog alt-metal". At Durst's insistence, the album was released as an underground album, without any advertising or promotion. Borland disagreed with the decision, suggesting that it was "self-sabotage": "Maybe he was already unhappy with the music, and he didn't really want to put it out there." The EP received mixed reviews. Stephen Thomas Erlewine praised the music, calling it "a step in the right direction – it's more ambitious, dramatic, and aggressive, built on pummeling verses and stop-start choruses." However, he felt that the band was being "held back" by Durst, whom he called "the most singularly unpleasant, absurd frontman in rock." IGN writer Spence D. similarly gave it a mixed review; he felt that the album lacked direction but showed potential for the band's musical growth. The Unquestionable Truth (Part 1) sold over 37,000 copies worldwide, peaking at No. 24 on the Billboard 200.
Following the release of the band's Greatest Hitz album, the band went on hiatus. Borland stated that it was unlikely that a sequel to The Unquestionable Truth would be produced and that "As of right now, none of my future plans include Limp Bizkit." Reunion, Gold Cobra and departure from Interscope (2009–2011) In 2009, Limp Bizkit reunited with Borland playing guitar and launched the Unicorns N' Rainbows Tour. Durst announced that they had begun to record a new album, which Borland titled Gold Cobra. Borland said that the title does not have any meaning, and that it was chosen because it fit the style of music the band was writing for the album. The band recorded a spoken intro written by Durst and performed by Kiss member Gene Simmons for the album, but it was left off the completed album. The band also recorded additional "non-album" tracks, including "Combat Jazz", which featured rapper Raekwon, and "Middle Finger", featuring Paul Wall. "Shotgun" was released as a single on May 17, 2011. The song is noted for featuring a guitar solo by Borland, something the band is not known for. "Shotgun" received favorable reviews, with Artistdirect writing, "['Shotgun'] feels like Bizkit approached the signature style on Three Dollar Bill Y'All and Significant Other with another decade-plus of instrumental experience and virtuosity, carving out a banger that could get asses moving in the club or fists flying in the mosh pit." Gold Cobra was released on June 28 and received mixed to positive reviews. AllMusic's Stephen Thomas Erlewine called it "a return to the full-throttled attack of Three Dollar Bill Y'All". IGN writer Chad Grischow wrote, "Though far from their best work, Limp Bizkit's latest at least proves that their 2005 Greatest Hitz album may have been premature." Metal Hammer writer Terry Bezer praised the album, writing, "Aside from the odd duff moment, Gold Cobra throws out the hot shit that'll make you bounce in the mosh pit over and over again."
The band launched the Gold Cobra Tour in support of the album, and a music video for the title track was released. Gold Cobra sold nearly 80,000 copies in the United States and peaked at No. 16 on the Billboard 200; however, the band left Interscope after the album's release. Stampede of the Disco Elephants and Still Sucks (2012–present) In February 2012, the band returned to Australia for the first time in 11 years to perform at the Soundwave festival. Durst dedicated the shows to Jessica Michalik, who died during the Limp Bizkit performance at Big Day Out 2001. Limp Bizkit signed with Cash Money Records. Following a dispute between Durst, Lethal and Otto about the latter two's alleged chronic drug and alcohol use, DJ Lethal angrily left the band. DJ Lethal later posted an apology to the band on Twitter, but was ultimately not allowed back into the band. Fred Durst was featured in the song "Champions" by Kevin Rudolf, used as the theme for WWE's Night of Champions. The song debuted on WWE Raw on September 3, 2012; this was the first time Limp Bizkit had worked with WWE since 2003. The band recorded their seventh studio album, Stampede of the Disco Elephants, with producer Ross Robinson, who also produced the band's debut album, Three Dollar Bill Y'All, and their 2005 EP The Unquestionable Truth (Part 1). On March 24, 2013, the first single from the album, "Ready to Go" (featuring Lil Wayne), was released on limpbizkit.com. In November, a cover of the Ministry song "Thieves" was released by the band via their official Facebook and Twitter accounts. In December, the band released the previously leaked song "Lightz" along with an accompanying music video. The next single off the album, "Endless Slaughter", was set to be released only on cassette and during concerts, but could also be downloaded from the band's official website. In October 2014, Fred Durst revealed that the band had left Cash Money and become independent again.
The split was carried out amicably, and Durst said, "We really love the jam we did with Lil Wayne, though. We love that song." Limp Bizkit performed as headliners of the ShipRocked 2015 cruise from February 2 to 6; other bands on board included Chevelle, Black Label Society, P.O.D. and Sevendust. The band announced a major 2015 tour called "Money Sucks", a 20-date Russian tour taking place in October and November and celebrating the 20th anniversary of Limp Bizkit. The tour name was a nod to the difficult economic situation Russia was facing at the time. Before the band traveled to Europe for the "Money Sucks" tour, Sam Rivers was diagnosed with a degenerative disc disease of the spine, complicated by a pinched nerve that caused severe pain and prevented him from touring with the band. The 23-year-old German bassist Samuel Gerhard Mpungu replaced Rivers for the tour. Limp Bizkit played several concerts in the United Kingdom in winter 2016 alongside Korn. Regarding the tour, Durst said: "You may have experienced a lot of cool concerts in your life, but I can guarantee you that an evening with Korn and Limp Bizkit will always and forever be your favorite. No one brings the party harder, heavier, and more exciting than us. No one. And ... make sure you get plenty of rest the night before. It's time to bring it back!" Amid scant information and constant delays in the release of Stampede of the Disco Elephants, Borland said in an appearance on the podcast "Someone Who Isn't Me" that Durst "isn't happy" with what he was working on.
A well-developed travertine-processing industry exists, especially in the Ausoni-Tiburtina area (Tivoli and Guidonia Montecelio quarries). About 70% of the national sanitary-ceramics output comes from the Civita Castellana industrial district, and there is a textile district at Gaeta (Valle del Liri). In the district, production relationships are mostly of the subcontractor type; 40% of the companies produce semi-finished and finished products not intended for direct marketing. There is some R&D activity in high technology: IBM (IBM Rome Software Lab), Ericsson, Leonardo Electronics (Rome-Tiburtina, Rome-Laurentina, Pomezia, Latina) and Rheinmetall ("Radar House"); in the tire industry, Bridgestone has an R&D center in Rome and proving grounds in Aprilia. Consumer goods The most distinctive industry in Lazio is the production of household chemicals, pharmaceutical and hygiene goods, and toilet paper and tissue products: Sigma-Tau, Johnson & Johnson, Procter & Gamble, Colgate-Palmolive, Henkel, Pfizer, Abbott, Catalent, Angelini, Menarini, Biopharma and Wepa. Space Avio, headquartered in Colleferro, carries out research, development and manufacturing of solid-propellant motors and liquid-propellant engines for launch vehicles and tactical propulsion systems, including the boosters for the Ariane 5 rocket. Satellite services are provided by Telespazio, which is headquartered in Rome. Thales Alenia Space has two sites in Rome (Tiburtina and Saccomuro), where it designs and integrates Earth-observation, navigation and telecommunications satellites. Agriculture Among fruits, the most important are kiwifruit (first place in Italy) and the "Nocciola romana" hazelnut. Italy itself is the second-largest producer of kiwifruit worldwide, surpassed only by China. Infrastructure used for grape growing was easily adapted to kiwifruit cultivation. Animal husbandry Only sheep and buffalo herds are significant nationwide. Both are kept mainly for milk, which is used to produce Pecorino Romano and Mozzarella di Bufala cheese.
The sheep flock is the third largest nationwide, after Sardinia and Sicily; 40% of the sheep are bred in the province of Viterbo. Viticulture Vineyards cover a considerable area in Lazio, and 90% of the wines are white. In the production of quality wine, Lazio ranks 14th of the 20 regions, with 190,557 hl. There are three DOCG wines: Frascati Superiore, Cannellino di Frascati and Cesanese del Piglio. Unemployment The unemployment rate stood at 9.1% in 2020. Demographics With a population of 5,864,321, Lazio is the second most populated region of Italy. The overall population density in the region is 341 inhabitants per km2. However, population density ranges widely, from almost 800 inhabitants per km2 in the highly urbanized Rome metropolitan area to fewer than 60 inhabitants per km2 in the mountainous and rural Province of Rieti. As of January 2010, the Italian national institute of statistics ISTAT estimated that 497,940 foreign-born immigrants lived in Lazio, equal to 8.8% of the total regional population. Government and politics Rome is centre-left politically oriented by tradition, while the rest of Lazio is centre-right oriented. In the 2008 general election, Lazio gave 44.2% of its vote to the centre-right coalition, while the centre-left bloc took 41.4%. In the 2013 general election, Lazio gave 40.7% of its vote to the centre-left coalition, 29.3% to the centre-right coalition and 20.2% to the Five Star Movement. Administrative divisions Lazio is divided into four provinces and one metropolitan (province-level) city. Cuisine One of the most famous foods of Lazio is pasta, and several well-known dishes were invented in the region. Guanciale is used in several sauces. Guanciale is the cut of pork obtained from the cheek of the pig, crossed by lean veins of muscle with a component of valuable fat, of a composition different from lardo (back fat) and pancetta (belly fat): the consistency is harder than pancetta and it possesses a more distinctive flavor. Guanciale is salted pork fat, unlike bacon, which is smoked.
It is a typical product of Lazio, Umbria and Abruzzo. Another important ingredient is Pecorino Romano cheese. Vegetables are common, artichokes (carciofi) being among the most popular. Other popular vegetables are Romanesco broccoli, asparagus, fava beans, cima di rapa, romaine lettuce, pumpkin, zucchini and chicory. Spices In the cuisine of Lazio, spices are widely used. Among the most used are lesser calamint, also called "nepetella" (for artichokes and mushrooms), pennyroyal, also called "poleggio" (for lamb and tripe), laurel, rosemary, sage, juniper, chili and truffle. Quinto quarto Although Roman and Lazio cuisine use cheap ingredients like vegetables and pasta, poor people needed a source of protein. Therefore, they used the so-called "quinto quarto" (the fifth quarter), the leftovers from animal carcasses that remained after the sale of the prized parts to the wealthy. Quinto quarto includes tripe (the most valuable part being the reticulum, also called "cuffia", along with the "omaso" and the "lampredotto"), kidneys (which need to be soaked for a long time in water with lemon to remove the smell of urine), heart, liver, spleen, sweetbreads (pancreas, thymus and salivary glands), brain, tongue, oxtail, trotters and pajata (the intestines of a calf fed only on its mother's milk). The intestines are cleaned and skinned but the chyme (the mass of partly digested food) is left inside. Several typical dishes are prepared in this style. Meat dishes Traditional meat dishes include Saltimbocca alla Romana (veal wrapped with Prosciutto di Parma and sage and cooked in white wine, butter and flour) and Abbacchio alla Romana (roasted lamb with garlic, rosemary, pepper and chopped prosciutto). Sports The region gives its name to the professional football club S.S. Lazio. Geography Lazio borders Tuscany, Umbria and Marche to the north, Abruzzo and Molise to the east, Campania to the south, and the Tyrrhenian Sea to the west. The region is mainly flat, with small mountainous areas in the most eastern and southern districts. The coast of Lazio is mainly composed of sandy beaches, punctuated by the headlands of Circeo (541 m) and Gaeta (171 m).
The Pontine Islands, which are part of Lazio, lie off the region's southern coast. Behind the coastal strip, to the north, lies the Maremma Laziale (the continuation of the Tuscan Maremma), a coastal plain interrupted at Civitavecchia by the Tolfa Mountains (616 m). The central section of the region is occupied by the Roman Campagna, a vast alluvial plain surrounding the city of Rome. The southern districts are characterized by the flatlands of the Agro Pontino, a once swampy and malarial area that was reclaimed over the centuries. The Preapennines of Latium, marked by the Tiber valley and the Liri with its Sacco tributary, include, on the right of the Tiber, three groups of mountains of volcanic origin: the Volsini, Cimini and Sabatini, whose largest former craters are occupied by the Bolsena, Vico and Bracciano lakes. To the south of the Tiber, other mountain groups form part of the Preapennines: the Alban Hills, also of volcanic origin, and the calcareous Lepini, Ausoni and Aurunci Mountains. The Apennines of Latium are a continuation of the Apennines of Abruzzo: the Reatini Mountains with Terminillo (2,213 m), and the Sabini, Prenestini, Simbruini and Ernici mountains, which continue east of the Liri into the Mainarde Mountains. The highest peak is Mount Gorzano (2,458 m), on the border with Abruzzo. History The Italian word Lazio descends from the Latin word Latium, the region of the Latins (Latini in the Latin language spoken by them), a name passed on to the Latin city-state of Ancient Rome. Although the demography of ancient Rome was multi-ethnic, including, for example, Etruscans, Sabines and other Italics besides the Latini, the latter were the dominant constituent. In Roman mythology, the tribe of the Latini took their name from King Latinus.
Apart from the mythical derivation of Lazio given by the ancients, as the place where Saturn, ruler of the golden age in Latium, hid (latuisset) from Jupiter, a major modern etymology is that Lazio comes from the Latin word "latus", meaning "wide", expressing the idea of "flat land" and referring to the Roman Campagna. Much of Lazio is in fact flat or rolling. The lands originally inhabited by the Latini were extended into the territories of the Samnites, the Marsi, the Hernici, the Aequi, the Aurunci and the Volsci, all surrounding Italic tribes. This larger territory was still called Latium, but it was divided into Latium adiectum or Latium Novum, the added lands or New Latium, and Latium Vetus, or Old Latium, the older, smaller region. The northern border of Lazio was the Tiber river, which divided it from Etruria. The emperor Augustus officially united almost all of present-day Italy into a single geo-political entity, Italia, dividing it into eleven regions. The part of today's Lazio south of the Tiber river – together with the present region of Campania immediately to the southeast of Lazio and the seat of Neapolis – became Regio I (Latium et Campania), while modern Upper Lazio became part of Regio VII (Etruria), and today's Province of Rieti joined Regio IV (Samnium). After the Gothic conquest of Italy at the end of the fifth century, modern Lazio became part of the Ostrogothic Kingdom, but after the Gothic War between 535 and 554 and the conquest by the Byzantine Empire, the region became the property of the Eastern Emperor as the Duchy of Rome. However, the long wars against the Longobards weakened the region. With the Donation of Sutri in 728, the Pope acquired the first territory in the region beyond the Duchy of Rome. The strengthening of the religious and ecclesiastical aristocracy led to continuous power struggles between secular lords (baroni) and the Pope until the middle of the 16th century.
Innocent III tried to strengthen his own territorial power, wishing to assert his authority in the provincial administrations of Tuscia, Campagna and Marittima through the Church's representatives, in order to reduce the power of the Colonna family. Other popes tried to do the same. During the period when the papacy resided in Avignon, France (1309–1377), the feudal lords' power increased due to the absence of the Pope from Rome. Small communes, and Rome above all, opposed the lords' increasing power, and with Cola di Rienzo, they tried to present themselves as antagonists of the ecclesiastical power. However, between 1353 and 1367, the papacy regained control of Lazio and the rest of the Papal States. From the middle of the 16th century, the papacy politically unified Lazio with the Papal States, so that these territories became provincial administrations of St. Peter's estate; governors in Viterbo, in Marittima and Campagna, and in Frosinone administered them for the papacy. Lazio was part of the short-lived Roman Republic, a puppet state of the First French Republic established by the forces of Napoleon Bonaparte; it was returned to the Papal States in October 1799. In 1809, it was annexed to the French Empire under the name of the Department of Tibre, but returned to the Pope's control in 1815. On 20 September 1870, following France's defeat at Sedan, the capture of Rome during the reign of Pope Pius IX completed Italian unification, and Lazio was incorporated into the Kingdom of Italy. In 1927 the territory of the Province of Rieti, previously belonging to Umbria and Abruzzo, joined Lazio. Towns in Lazio were devastated by the 2016 Central Italy earthquake. Economy Agriculture, crafts, animal husbandry and fishery are the main traditional sources of income. Agriculture is characterized by the cultivation of wine grapes, fruit, vegetables and olives. Lazio is the main growing region
to Los Alamos, New Mexico. Los Alamos may also refer to:

Establishments
Los Alamos National Laboratory
Los Alamos Laboratory, also known as Project Y, the war-time laboratory in Los Alamos, New Mexico
Los Alamos Museum
Los Alamos Ranch School, boys' school

Geographic Locations
Los Alamos, California
Los Álamos, Chile
Los Alamos, New Mexico
Los Alamos County, New Mexico
Cañada de los Alamos, New Mexico

Media
Los Alamos, a novel by Joseph Kanon
Los Alamos, a book
villain, his "sinister" features overshadowing his acting skills. After suffering serious injuries in a car crash, Van Cleef had begun to lose interest in his declining career by the time Sergio Leone gave him a major role in For a Few Dollars More. The film made him a box-office draw, especially in Europe. Youth Van Cleef, born of partial Dutch, English and German ancestry on January 9, 1925, in Somerville, New Jersey, was the son of Marion Lavinia Van Fleet and Clarence LeRoy Van Cleef. At age 17, he obtained his high school diploma early in his senior year at Somerville High School in order to enlist in the United States Navy in September 1942. Military service After basic training and further training at the Naval Fleet Sound School, Van Cleef was assigned to a submarine chaser and then to a minesweeper, USS Incredible, on which he worked as a sonarman. The ship initially patrolled the Caribbean, then moved to the Mediterranean, participating in the landings in southern France. In January 1945, Incredible moved to the Black Sea, and performed sweeping duties out of the Soviet Navy base at Sevastopol, Crimea. Afterwards, the ship performed air-sea rescue patrols in the Black Sea before returning to Palermo, Sicily. By the time of his discharge in March 1946, he had achieved the rank of Sonarman First Class (SO1) and had earned his mine sweeper patch. He also had been awarded the Bronze Star and the Good Conduct Medal. By virtue of his deployments, Van Cleef also qualified for the European-African-Middle Eastern Campaign Medal, the Asiatic-Pacific Campaign Medal, the American Campaign Medal, and the World War II Victory Medal. Early acting career After leaving the Navy, Van Cleef read for a part in Our Town at the Little Theater Group in Clinton, New Jersey and received his first stage role. From there, he continued to meet with the group and audition for parts.
His next big part was that of the boxer Joe Pendleton in the play Heaven Can Wait. During this time, he was observed by visiting talent scouts, who were impressed by Van Cleef's stage presence and delivery. One of these scouts later took him to New York City talent agent Maynard Morris of the MCA agency, who then sent him to the Alvin Theater for an audition. The play was Mister Roberts. Van Cleef's screen debut came in High Noon. During a performance of Mister Roberts in Los Angeles, he was noticed by film producer Stanley Kramer, who offered Van Cleef a role in his upcoming film. Kramer originally wanted Van Cleef for the role of the deputy Harvey Pell, but as he wanted Van Cleef to have his "distinctive nose" fixed, Van Cleef declined the role in favor of the part of the silent gunslinger Jack Colby. He was then cast mostly in villainous roles, due to his sharp cheeks and chin, piercing eyes, and hawk-like nose, from the part of Tony Romano in Kansas City Confidential (1952), culminating 14 years later in Sergio Leone's The Good, the Bad and the Ugly (1966). Aside from Westerns and science fiction films, three of his early major roles were in the noir films Kansas City Confidential (1952), Vice Squad (1953) and The Big Combo (1955). Van Cleef appeared six times between 1951 and 1955 on the children's syndicated Western series The Adventures of Kit Carson, starring Bill Williams. He was cast three times on another syndicated Western series, The Range Rider, including as Rocky Hatch in the episode "Greed Rides the Range" (1952). In 1952, he was cast in the episode "Formula for Fear" of the Western aviation series Sky King. He appeared in episode 82 of the TV series The Lone Ranger in 1952. In 1954, Van Cleef appeared as Jesse James in the syndicated series Stories of the Century. In 1955, he was cast twice on another syndicated Western series, Annie Oakley. That same year, he guest-starred on the CBS Western series Brave Eagle.
In 1955, he played one of the two villains in an episode of The Adventures of Champion the Wonder Horse. In 1958, he was cast as Ed Murdock, a rodeo performer trying to reclaim the title in the event at Madison Square Garden in New York City, on Richard Diamond, Private Detective. Van Cleef played different characters on four episodes of ABC's The Rifleman, with Chuck Connors, between 1959 and 1962, and twice on ABC's Tombstone Territory. In 1958, he was cast as Deputy Sid Carver in the episode "The Great Stagecoach Robbery" of another syndicated Western series, Frontier Doctor, starring Rex Allen. In 1959, Van Cleef appeared as Luke Clagg in the episode "Strange Request" of the NBC Western series Riverboat starring Darren McGavin, as Jumbo Kane in the episode "The Hostage" on the CBS Western series Wanted Dead or Alive starring Steve McQueen, and in an episode of Maverick titled "Red Dog" in 1960 starring Roger Moore and John Carradine. Van Cleef played a sentry on an episode of the ABC sitcom The Real McCoys, with Walter Brennan. Van Cleef was cast with Pippa Scott and again with Chuck Connors in the 1960 episode "Trial by Fear" of the CBS anthology series The DuPont Show with June Allyson. A young Van Cleef also made an appearance on The Andy Griffith Show and as Frank Diamond in The Untouchables, in an episode entitled "The Unhired Assassin". He also appeared in an episode of the ABC/Warner Brothers Western series The Alaskans. Van Cleef guest-starred on the CBS Western series Have Gun – Will Travel, on the ABC/Warner Bros. series Colt .45, on the NBC Western series Cimarron City and Laramie, and on Rod Cameron's syndicated crime dramas City Detective and State Trooper. He guest-starred in an episode of John Bromfield's syndicated crime drama Sheriff of Cochise. Van Cleef starred as minor villains and henchmen in various Westerns, including The Tin Star and Gunfight at the O.K. Corral.
His film characters died in many of his Westerns and gangster portrayals. In 1960, he appeared as a villainous swindler in the Bonanza episode, "The Bloodline" (December 31, 1960) and also made an appearance on Gunsmoke. In 1961, he played a role on episode 7 ("The Grave") of the third season of The Twilight Zone. He played a villainous henchman of Lee Marvin's title character in the 1962 John Ford movie The Man Who Shot Liberty Valance. In 1963, he appeared on Perry Mason (episode: "The Case of the Golden Oranges"). That same year, he appeared in "The Day of the Misfits" on The Travels of Jaimie McPheeters. Stardom In 1965, Sergio Leone cast Van Cleef, whose career had yet to take off, as a main protagonist alongside Clint Eastwood in For a Few Dollars More. Leone then chose Van Cleef to appear again with Eastwood, this time as the primary antagonist, Angel Eyes, in the now seminal Western The Good, the Bad and the Ugly (1966). With his roles in Leone's films, Van Cleef became a major star of Spaghetti Westerns, playing central, and often surprisingly heroic, roles in films such as The Big Gundown (1966), Death Rides a Horse (1967), Day of Anger (1967), and The Grand Duel (1972). He played the title role in Sabata (1969) and Return of Sabata (1971), and co-starred with Jim Brown in an Italian-American co-production, Take a Hard Ride (1975). In his final two westerns he co-starred with Leif Garrett in God's Gun (1976) and Kid Vengeance (1977), both of which were filmed mainly in Israel. Van Cleef later had a supporting role in John Carpenter's cult film Escape from New York (1981). He slipped out of the limelight in his later years. In 1984, he was cast as a ninja master in the NBC adventure series The Master, but it was canceled after thirteen episodes. In all, Van Cleef is credited with 90 movie roles and 109 television appearances over a 38-year span. Personal life Van Cleef was married three times. His first marriage was to Patsy Ruth Kahle, in 1943. 
They had three children, Alan, Deborah and David, and divorced in 1958. His second marriage was to Joan Marjorie Drane, from 1960 to 1974. His final marriage, in 1976, was to Barbara Havelone, who survived him. He lost the last joint of the middle finger of his right hand while building a playhouse for his daughter. In 1958, a severe car crash nearly cost Van Cleef his life and career. A resulting knee injury made his physicians think that he would never ride a horse again. This injury plagued Van Cleef for the rest of his life and caused him great pain. His recovery was long and difficult and halted his acting for a time. He then began a business in interior decoration with his second wife Joan, as well as pursuing his talent for painting, primarily seascapes and landscapes. Death Despite suffering from heart disease from the late 1970s and having a pacemaker installed in the early 1980s, Van Cleef continued to work in films until his death on December 16, 1989, at age 64. He collapsed in his home in Oxnard, California, from a heart attack. Throat cancer was listed as a secondary cause of death. He was buried at Forest Lawn Memorial Park Cemetery, Hollywood Hills, California, with an inscription on his grave marker referring to his many acting performances as a villain: "BEST OF THE BAD". In popular culture Lee Van Cleef's characters in the Sergio Leone movies inspired the creation of the characters Elliot Belt of the Lucky Luke comic album The Bounty Hunter, and Cad Bane of the Star Wars franchise. The band Primus has a song about Lee Van Cleef on their album Green Naugahyde. Guitarist and ex-Guns N' Roses member Ron Thal recorded an instrumental piece titled "The Legend of Van Cleef". The Warcraft universe features the villain Edwin Van Cleef, inspired by Lee Van Cleef. The Black Library magazine Inferno! featured
high school education. He also began reading omnivorously, focusing, above all, on 19th-century Italian poets such as Giosuè Carducci and Arturo Graf. He then started writing his first poems and fell in love with his cousin Lina. During this period, the first signs of serious differences arose between Luigi and his father; Luigi had discovered some notes revealing the existence of Stefano's extramarital relations. As a reaction to the ever-increasing distrust and disharmony that Luigi was developing toward his father, a man of a robust physique and crude manners, his attachment to his mother would continue growing to the point of profound veneration. This later expressed itself, after her death, in the moving pages of the novella Colloqui con i personaggi in 1915. His romantic feelings for his cousin, initially looked upon with disfavour, were suddenly taken very seriously by Lina's family. They demanded that Luigi abandon his studies and dedicate himself to the sulphur business so that he could immediately marry her. In 1886, during a vacation from school, Luigi went to visit the sulphur mines of Porto Empedocle and started working with his father. This experience was essential to him and would provide the basis for such stories as Il Fumo, Ciàula scopre la Luna as well as some of the descriptions and background in the novel The Old and the Young. The marriage, which had seemed imminent, was postponed. Pirandello then registered at the University of Palermo in the departments of Law and of Letters. The campus at Palermo, and above all the Department of Law, was the centre in those years of the vast movement which would eventually evolve into the Fasci Siciliani. Although Pirandello was not an active member of this movement, he had close ties of friendship with its leading ideologists: Rosario Garibaldi Bosco, Enrico La Loggia, Giuseppe De Felice Giuffrida and Francesco De Luca. 
Higher education In 1887, having definitively chosen the Department of Letters, he moved to Rome in order to continue his studies. But the encounter with the city, centre of the struggle for unification in which the families of his parents had participated with generous enthusiasm, was disappointing and nothing close to what he had expected. "When I arrived in Rome it was raining hard, it was night time and I felt like my heart was being crushed, but then I laughed like a man in the throes of desperation." Pirandello, who was an extremely sensitive moralist, finally had a chance to see for himself the irreducible decadence of the so-called heroes of the Risorgimento in the person of his uncle Rocco, now a greying and exhausted functionary of the prefecture who provided him with temporary lodgings in Rome. The "desperate laugh", the only manifestation of revenge for the disappointment undergone, inspired the bitter verses of his first collection of poems, Mal Giocondo (1889). But not all was negative; this first visit to Rome provided him with the opportunity to assiduously visit the many theatres of the capital: Il Nazionale, Il Valle, Il Manzoni. "Oh the dramatic theatre! I will conquer it. I cannot enter into one without experiencing a strange sensation, an excitement of the blood through all my veins..." Because of a conflict with a Latin professor, he was forced to leave the University of Rome and went to Bonn with a letter of presentation from one of his other professors. The stay in Bonn, which lasted two years, was fervid with cultural life. He read the German romantics, Jean Paul, Tieck, Chamisso, Heinrich Heine and Goethe. He began translating the Roman Elegies of Goethe, composed the Elegie Boreali in imitation of the style of the Roman Elegies, and he began to meditate on the topic of humorism by way of the works of Cecco Angiolieri.
In March 1891 he received his doctorate in Romance Philology with a dissertation on the dialect of Agrigento: Sounds and Developments of Sounds in the Speech of Craperallis. Marriage After a brief sojourn in Sicily, during which the planned marriage with his cousin was finally called off, he returned to Rome, where he became friends with a group of writer-journalists including Ugo Fleres, Tomaso Gnoli, Giustino Ferri and Luigi Capuana. Capuana encouraged Pirandello to dedicate himself to narrative writing. In 1893 he wrote his first important work, Marta Ajala, which was published in 1901 as L'Esclusa. In 1894 he published his first collection of short stories, Amori senza Amore. That same year, following his father's suggestion, he married Maria Antonietta Portulano, a shy, withdrawn girl from a good family of Agrigentine origin, educated by the nuns of San Vincenzo. The first years of matrimony brought on in him a new fervour for his studies and writings: his encounters with his friends and the discussions on art continued, more vivacious and stimulating than ever, while his family life, despite his wife's complete incomprehension of her husband's artistic vocation, proceeded relatively tranquilly with the birth of two sons (Stefano and Fausto) and a daughter (Rosalia "Lietta"). In the meantime, Pirandello intensified his collaborations with newspaper editors and other journalists in magazines such as La Critica and La Tavola Rotonda, in which he published, in 1895, the first part of the Dialoghi tra Il Gran Me e Il Piccolo Me. In 1897 he accepted an offer to teach Italian at the Istituto Superiore di Magistero di Roma, and in the magazine Marzocco he published several more pages of the Dialoghi. In 1898, with Italo Falbo and Ugo Fleres, he founded the weekly Ariel, in which he published the one-act play L'Epilogo (later changed to La Morsa) and some novellas (La Scelta, Se...).
The end of the 19th century and the beginnings of the 20th were a period of extreme productivity for Pirandello. In 1900, he published in Marzocco some of the most celebrated of his novellas (Lumie di Sicilia, La Paura del Sonno...) and, in 1901, the collection of poems Zampogna. In 1902 he published the first series of Beffe della Morte e della Vita and his second novel, Il Turno. Family disaster The year 1903 was fundamental to the life of Pirandello. The flooding of the sulphur mines of Aragona, in which his father Stefano had invested not only an enormous amount of his own capital but also Antonietta's dowry, precipitated the financial collapse of the family. Antonietta, after opening and reading the letter announcing the catastrophe, entered into a state of semi-catatonia and underwent such a psychological shock that her mental balance remained profoundly and irremediably shaken. Pirandello, who had initially harboured thoughts of suicide, attempted to remedy the situation as best he could by increasing the number of his lessons in both Italian and German and asking for compensation from the magazines to which he had freely given away his writings and collaborations. In the magazine New Anthology, directed by G. Cena, meanwhile, the novel which Pirandello had been writing while in this horrible situation (watching over his mentally ill wife at night after an entire day spent at work) began appearing in episodes. The title was Il Fu Mattia Pascal (The Late Mattia Pascal). This novel contains many autobiographical elements that have been fantastically re-elaborated. It was an immediate and resounding success. Translated into German in 1905, this novel paved the way to the notoriety and fame which allowed Pirandello to publish with the more important firms such as Treves, with whom he published, in 1906, another collection of novellas Erma Bifronte. 
In 1908 he published a volume of essays entitled Arte e Scienza and the important essay L'Umorismo, in which he initiated the legendary debate with Benedetto Croce that would continue with increasing bitterness and venom on both sides for many years. In 1909 the first part of I Vecchi e I Giovani was published in episodes. This novel retraces the history of the failure and repression of the Fasci Siciliani in the period from 1893 to 1894. When the novel came out in 1913 Pirandello sent a copy of it to his parents for their fiftieth wedding anniversary along with a dedication which said that "their names, Stefano and Caterina, live heroically." However, while the mother is transfigured in the novel into the otherworldly figure of Caterina Laurentano, the father, represented by the husband of Caterina, Stefano Auriti, appears only in memories and flashbacks, since, as was acutely observed by Leonardo Sciascia, "he died censured in a Freudian sense by his son who, in the bottom of his soul, is his enemy." Also in 1909, Pirandello began his collaboration with the prestigious journal Corriere della Sera in which he published the novellas Mondo di Carta (World of Paper), La Giara, and, in 1910, Non è una cosa seria and Pensaci, Giacomino! (Think it over, Giacomino!) At this point Pirandello's fame as a writer was continually increasing. His private life, however, was poisoned by the suspicion and obsessive jealousy of Antonietta who began turning physically violent. In 1911, while the publication of novellas and short stories continued, Pirandello finished his fourth novel, Suo Marito, republished posthumously (1941), and completely revised in the first four chapters, with the title Giustino Roncella nato Boggiòlo. During his life the author never republished this novel for reasons of discretion; within are implicit references to the writer Grazia Deledda. 
But the work which absorbed most of his energies at this time was the collection of stories La Vendetta del Cane, Quando s'è capito il giuoco, Il treno ha fischiato, Filo d'aria and Berecche e la guerra. They were all published from 1913 to 1914 and are all now considered classics of Italian literature. First World War As Italy entered the First World War, Pirandello's son Stefano volunteered for service and was taken prisoner by the Austro-Hungarians. In 1916 the actor Angelo Musco successfully recited the three-act comedy that the writer had extracted from the novella Pensaci, Giacomino! and the pastoral comedy Liolà. In 1917 the collection of novellas E domani Lunedì (And Tomorrow, Monday...) was published, but the year was mostly marked by important theatrical representations: Così è (se vi pare) (Right you are (if you think so)), A birrita cu' i ciancianeddi and Il Piacere dell'onestà (The Pleasure Of Honesty). A year later, Ma non è una cosa seria (But It's Nothing Serious) and Il Gioco delle parti (The Game of Roles) were all produced on stage. Pirandello's son Stefano returned home when the war ended. In 1919 Pirandello had his wife placed in an asylum. The separation from his wife, despite her morbid jealousies and hallucinations, caused great suffering for Pirandello who, even as late as 1924, believed he could still properly care for her at home. She never left the asylum. 1920 was the year of comedies such as Tutto per bene, Come prima meglio di prima, and La Signora Morli. In 1921, the Compagnia di Dario Niccodemi staged, at the Valle di Roma, the play, Sei personaggi in cerca d'autore, Six Characters in Search of an Author. It was a clamorous failure. The public divided into supporters and adversaries, the latter of whom shouted, "Asylum, Asylum!" The author, who was present at the performance with his daughter Lietta, left through a side exit to avoid the crowd of enemies. The same drama, however, was a great success when presented in Milan. 
In 1922 in Milan, Enrico IV was performed for the first time and was acclaimed universally as a success. Pirandello's international reputation was developing as well. The Sei personaggi was performed in London and New York. Italy under the Fascists In 1924 Pirandello wrote a letter to Benito Mussolini asking him to be accepted as a member of the National Fascist Party. In 1925, Pirandello, with the help of Mussolini, assumed the artistic direction and ownership of the Teatro d'Arte di Roma, founded by the Gruppo degli Undici. He described himself as "a Fascist because I am Italian." For his devotion to Mussolini, the satirical magazine Il Becco Giallo used to call him P. Randello (randello in Italian means cudgel). He expressed publicly apolitical belief, saying "I'm apolitical, I'm only a man in the world." He had continuous conflicts with fascist leaders. In 1927 he tore his fascist membership card to pieces in front of the startled secretary-general of the Fascist Party. For the remainder of his life, Pirandello was always under close surveillance by the secret fascist police OVRA. His play, The Giants of the Mountain, has been interpreted as evidence of his realization that the fascists were hostile to culture; yet, during a later appearance in New York, Pirandello distributed a statement announcing his support of Italy's annexation of Abyssinia. He gave his Nobel Prize medal to the Fascist government to be melted down for the Abyssinia Campaign. Mussolini's support brought him international fame and a worldwide tour, introducing his work to London, Paris, Vienna, Prague, Budapest, Germany, Argentina, and Brazil. Pirandello's conception of the theatre underwent a significant change at this point. The idea of the actor as an inevitable betrayer of the text, as in the Sei personaggi, gave way to the identification of the actor with the character that they play. The company took their act throughout the major cities of Europe, and the Pirandellian repertoire became increasingly well known. Between 1925 and 1926 Pirandello's last and perhaps greatest novel, Uno, Nessuno e Centomila (One, No one and One Hundred Thousand), was published serially in the magazine Fiera Letteraria.
He was one of the contributors to the nationalist women's magazine, Lidel, which was established in 1919. Pirandello was named a member of the Royal Academy of Italy in 1929, and in 1934 he was awarded the Nobel Prize for Literature after he had been nominated by Guglielmo Marconi, member of the Royal Academy of Italy. He was the last Italian playwright to be chosen for the award until 9 October 1997. Pirandello died alone in his home at Via Bosio, Rome, on 10 December 1936. He refused a State funeral offered by Mussolini and only in 1947 were his cremated remains buried in Sicily. Selected works Major plays 1916: Liolà 1917: Così è (se vi pare) (So It Is (If You Think So)) 1917: Il piacere dell'onestà (The Pleasure of Honesty) 1918: Il gioco delle parti (The Rules of the Game) 1919: L'uomo, la bestia e la virtù (Man, Beast and Virtue) 1921: Sei personaggi in cerca d'autore (Six Characters in Search of an Author) 1922: Enrico IV (Henry IV) 1922: L'imbecille (The Imbecile) 1922: Vestire gli ignudi (To Clothe the Naked) 1923: L'uomo dal fiore in bocca (The Man with the Flower in His Mouth) 1923: L'altro figlio (The Other Son) 1923: La vita che ti diedi (The Life I Gave You) 1924: Ciascuno a suo modo (Each in His Own Way) 1924: Sagra del Signore della Nave (The Rite of the Lord of the Ship) 1926: L'Amica delle Mogli (The Friend of the Wives) 1926: Bellavita (Bellavita) 1927: Diana e la Tuda (Diana and Tuda) 1929: O di Uno o di Nessuno (Either of One or of None) 1929: Come Tu Mi Vuoi (As You Desire Me) 1930: Questa sera si recita a soggetto (Tonight We Improvise) Novels 1902: Il turno (The Turn) 1904: Il fu Mattia Pascal (The Late Mattia Pascal) 1908: L'esclusa (The Excluded Woman) 1911: Suo marito (Her Husband) 1913: I vecchi e i giovani (The Old and the Young) 1915: Si Gira, Quaderni di Serafino Gubbio (Shoot!, The Notebooks of Serafino Gubbio, Cinematograph Operator, 1926 English translation by C. K.
Scott Moncrieff) 1926: Uno, nessuno e centomila (One, No One and One Hundred Thousand) Short stories 1922–37: Novelle per un anno (Short Stories for a Year), 15 volumes. A selection of thirty stories was translated by Virginia Jewiss as Stories for the Years (Yale, 2021). Poetry 1889: Mal giocondo (Playful Evil) 1891: Pasqua
denominations venerate him as Saint Luke the Evangelist and as a patron saint of artists, physicians, bachelors, surgeons, students and butchers; his feast day is 18 October. Life Many scholars believe that Luke was a Greek physician who lived in the Greek city of Antioch in Ancient Syria, although some other scholars and theologians think Luke was a Hellenic Jew. While it has been widely accepted that the theology of Luke–Acts points to a gentile Christian writing for a gentile audience, some have concluded that it is more plausible that Luke–Acts is directed to a community made up of both Jewish and gentile Christians since there is stress on the scriptural roots of the gentile mission (see the use of Isaiah 49:6 in Luke–Acts). Others have only been prepared to conclude that Luke was either a Hellenistic Jew or a god-fearer. His earliest notice is in Paul's Epistle to Philemon. He is also mentioned in Colossians 4:14 and 2 Timothy 4:11, two Pauline works. The next earliest account of Luke is in the Anti-Marcionite Prologue to the Gospel of Luke, a document once thought to date to the 2nd century, but which has more recently been dated to the later 4th century. Helmut Koester, however, claims that the following part, the only part preserved in the original Greek, may have been composed in the late 2nd century. Epiphanius states that Luke was one of the Seventy Apostles (Panarion 51.11), and John Chrysostom indicates at one point that the "brother" Paul mentions in the Second Epistle to the Corinthians 8:18 is either Luke or Barnabas (Homily 18 on Second Corinthians on 2 Corinthians 8:18). If one accepts that Luke was indeed the author of the Gospel bearing his name and also the Acts of the Apostles, certain details of his personal life can be reasonably assumed. While he does exclude himself from those who were eyewitnesses to Jesus' ministry, he repeatedly uses the word "we" in describing the Pauline missions in Acts of the Apostles, indicating that he was personally there at those times.
There is similar evidence that Luke resided in Troas, the province which included the ruins of ancient Troy, in that he writes in Acts in the third person about Paul and his travels until they get to Troas, where he switches to the first person plural. The "we" section of Acts continues until the group leaves Philippi, when his writing goes back to the third person. This change happens again when the group returns to Philippi. There are three "we sections" in Acts, all following this rule. Luke never stated, however, that he lived in Troas, and this is the only evidence that he did. The composition of the writings, as well as the range of vocabulary used, indicate that the author was an educated man. A quote in the Epistle to the Colossians differentiates between Luke and other colleagues "of the circumcision." This comment has traditionally caused commentators to conclude that Luke was a gentile. If this were true, it would make Luke the only writer of the New Testament who can clearly be identified as not being Jewish. However, that is not the only possibility. Although Luke is considered likely to have been a gentile Christian, some scholars believe him to have been a Hellenized Jew. The phrase could just as easily be used to differentiate between those Christians who strictly observed the rituals of Judaism and those who did not. Luke's presence in Rome with the Apostle Paul near the end of Paul's life was attested by 2 Timothy 4:11: "Only Luke is with me". In the last chapter of the Book of Acts, widely attributed to Luke, there are several accounts in the first person also affirming Luke's presence in Rome, including Acts 28:16: "And when we came to Rome...". According to some accounts, Luke also contributed to the authorship of the Epistle to the Hebrews. Luke died at age 84 in Boeotia, according to a "fairly early and widespread tradition".
According to Nikephoros Kallistos Xanthopoulos, Greek historian of the 14th century (and others), Luke's tomb was located in Thebes, whence his relics were transferred to Constantinople in the year 357. Authorship of Luke and Acts The Gospel of Luke does not name its author. The Gospel was not, nor does it claim to be, written by direct witnesses to the reported events, unlike Acts beginning in the sixteenth chapter. However, in most translations the author suggests that they have investigated the book's events and notes the name of the one to whom they are writing (Theophilus). The earliest manuscript of the Gospel (Papyrus 75 = Papyrus Bodmer XIV-XV), dated circa AD 200, ascribes the work to Luke; as did Irenaeus writing circa AD 180, and the Muratorian fragment, a 7th-century Latin manuscript thought to […] been, for many, the final nail in Luke the historian's coffin." Robert M. Grant has noted that although Luke saw himself within the historical tradition, his work contains a number of statistical improbabilities, such as the sizable crowd addressed by Peter in Acts 4:4. He has also noted chronological difficulties whereby Luke "has Gamaliel refer to Theudas and Judas in the wrong order, and Theudas actually rebelled about a decade after Gamaliel spoke (5:36–7)", though this report's status as a chronological difficulty is hotly disputed. Brent Landau writes: As an artist Christian tradition, starting from the 8th century, states that Luke was the first icon painter. He is said to have painted pictures of the Virgin Mary and Child, in particular the Hodegetria image in Constantinople (now lost). Starting from the 11th century, a number of painted images were venerated as his autograph works, including the Black Madonna of Częstochowa and Our Lady of Vladimir. He was also said to have painted Saints Peter and Paul, and to have illustrated a gospel book with a full cycle of miniatures.
Guilds of Saint Luke in the cities of late medieval Europe, especially Flanders, or the "Accademia di San Luca" (Academy of Saint Luke) in Rome—imitated in many other European cities during the 16th century—gathered together and protected painters. The tradition that Luke painted icons of Mary and Jesus has been common, particularly in Eastern Orthodoxy. The tradition also has support from the Saint Thomas Christians of India who claim to still have one of the Theotokos icons that Saint Luke painted and which Saint Thomas brought to India. Symbol In traditional depictions, such as paintings, evangelist portraits, and church mosaics, Saint Luke is often accompanied by an ox or bull, usually having wings. Sometimes only the symbol is shown, especially when in a combination of those of all Four Evangelists. Relics Despot George of Serbia purportedly bought the relics from the Ottoman sultan Murad II for 30,000 gold coins. After the Ottoman conquest of Bosnia, the kingdom's last queen, George's granddaughter Mary, who had brought the relics with her from Serbia as her dowry, sold them to the Venetian Republic. In 1992, the then Greek Orthodox Metropolitan Ieronymos of Thebes and Levathia (who subsequently became Archbishop Ieronymos II of Athens and All Greece) requested from Bishop Antonio Mattiazzo of Padua the return of "a significant fragment of the relics of St. Luke to be placed on the site where the holy tomb of the Evangelist is located and venerated today". This prompted a scientific investigation of the relics in Padua, and numerous lines of empirical evidence (archeological analyses of the Tomb in Thebes and the Reliquary of Padua, anatomical analyses of the remains, carbon-14 dating, comparison with the purported skull of the Evangelist located in Prague) confirmed that these were the remains of an individual of Syrian descent who died between AD 72 and AD 416.
The Bishop of Padua then delivered to Metropolitan Ieronymos the rib of Saint Luke that was closest to his heart to be kept at his tomb in Thebes. Thus, the relics of Saint Luke are divided as follows: The body, in the Abbey of Santa Giustina in Padua; The head, in the St. Vitus Cathedral in Prague; A rib, at his tomb in Thebes. Gallery See also John the Evangelist Mark the Evangelist Matthew the Evangelist References Notes Citations Sources This is a three-volume work documenting the international congress in Padua in 2000 on |
five miles northeast of Gillsburg, Mississippi. Ronnie Van Zant and Steve Gaines, along with backup singer Cassie Gaines (Steve's older sister), assistant road manager Dean Kilpatrick, pilot Walter McCreary, and co-pilot William Gray were killed on impact. Other band members (Collins, Rossington, Wilkeson, Powell, Pyle, and Hawkins), tour manager Ron Eckerman, and several road crew members suffered serious injuries. The accident came just three days after the release of Street Survivors. Following the crash and the ensuing press, Street Survivors became the band's second platinum album and reached No. 5 on the U.S. album chart. The single "What's Your Name" reached No. 13 on the single charts in 1978. The original cover sleeve for Street Survivors had featured a photograph of the band amid flames, with Steve Gaines nearly obscured by fire. Out of respect for the deceased (and at the request of Teresa Gaines, Steve's widow), MCA Records withdrew the original cover and replaced it with the album's back photo, a similar image of the band against a simple black background. However, the group would restore the original image for the 30th anniversary deluxe edition of the album. Lynyrd Skynyrd disbanded after the tragedy, reuniting only on one occasion to perform an instrumental version of "Free Bird" at Charlie Daniels' Volunteer Jam V in January 1979. Collins, Rossington, Powell, and Pyle were joined by Daniels and members of his band. Leon Wilkeson, who was still undergoing physical therapy for his badly broken left arm, was in attendance, along with Judy Van Zant, Teresa Gaines, JoJo Billingsley, and Leslie Hawkins. Hiatus (1977–1987) Rossington, Collins, Wilkeson and Powell formed the Rossington Collins Band, which released two MCA albums, Anytime, Anyplace, Anywhere in 1980 and This Is The Way in 1981. 
Deliberately avoiding comparisons with Ronnie Van Zant as well as suggestions that this band was Lynyrd Skynyrd reborn, Rossington and Collins chose a woman, Dale Krantz, as the lead vocalist. However, as an acknowledgement of their past, the band's concert encore would always be an instrumental version of "Free Bird". Rossington and Collins eventually had a falling out over the affections of Dale Krantz, whom Rossington married and with whom he formed The Rossington Band, which released two albums, Returned to the Scene of the Crime in 1986 and Love Your Man in 1988, and also opened for the Lynyrd Skynyrd Tribute Tour in 1987–1988. The other former members of Lynyrd Skynyrd continued to make music during the hiatus era. Billy Powell played keyboards in a Christian rock band named Vision, touring with established Christian rocker Mylon LeFevre. During Vision concerts, Powell's trademark keyboard talent was often spotlighted and he spoke about his conversion to Christianity after the near-fatal plane crash. Pyle formed the Artimus Pyle Band in 1982, which occasionally featured former Honkettes JoJo Billingsley and Leslie Hawkins and released one MCA album, titled A.P.B. In 1980, Allen Collins's wife Kathy died of a massive hemorrhage while miscarrying their third child. He formed the Allen Collins Band in 1983 from the remnants of the Rossington Collins Band and released one MCA studio album, Here, There & Back. He was visibly suffering from Kathy's death; he drank excessively and used drugs. On January 29, 1986, Collins, then 33, crashed his Ford Thunderbird into a ditch near his home in Jacksonville, killing his girlfriend Debra Jean Watts and leaving himself permanently paralyzed from the chest down.
Return (1987–1995) In 1987, Lynyrd Skynyrd reunited for a full-scale tour with five major members of the pre-crash band: crash survivors Gary Rossington, Billy Powell, Leon Wilkeson and Artimus Pyle, along with guitarist Ed King, who had left the band two years before the crash. Ronnie Van Zant's younger brother, Johnny, took over as the new lead singer and primary songwriter. Founding member Allen Collins, paralyzed since his 1986 car accident, was only able to participate as the musical director, choosing Randall Hall, his former bandmate in the Allen Collins Band, as his stand-in. In return for avoiding prison following his guilty plea to DUI manslaughter, Collins would be wheeled out onstage each night to explain to the audience why he could no longer perform (usually before the performance of "That Smell", the lyrics of which had been partially directed at him). Collins was stricken with pneumonia in 1989 and died on January 23, 1990, at age 37. The reunited band was intended to be a one-time tribute to the original lineup, captured on the double-live album Southern by the Grace of God: Lynyrd Skynyrd Tribute Tour 1987. That the band chose to continue after the 1987 tribute tour caused legal problems for the survivors, as Judy Van Zant Jenness and Teresa Gaines Rapp (widows of Ronnie and Steve, respectively) sued the others for violating an agreement made shortly after the plane crash, stating that they would not "exploit" the Skynyrd name for profit. As part of the settlement, Jenness and Rapp collect nearly 30% of the band's touring revenues (representing the shares their husbands would have earned had they lived), and hold a proviso requiring any band touring as Lynyrd Skynyrd to include Rossington and at least two of the other four surviving members from the pre-crash era, namely Wilkeson, Powell, King and Pyle. Following this rule, the band would have been forced to retire in 2001, but they have still continued to tour for another two decades.
The band released its first post-reunion album in 1991, entitled Lynyrd Skynyrd 1991. By that time, the band had added a second drummer, Kurt Custer. Artimus Pyle left the band during the same year, with Custer becoming the band's sole drummer. That lineup released a second post-reunion album, entitled The Last Rebel in 1993. Later that year, Randall Hall was replaced by Mike Estes. Member changes and deaths (1996–2019) Ed King had to take a break from touring in 1996 due to heart complications that required a transplant. In his absence, he was replaced by Hughie Thomasson. The band did not let King rejoin after he recovered. At the same time, Mike Estes was replaced by Rickey Medlocke, who had previously played and recorded with the band for a short time in the early 1970s. The result was a major retooling of the band's 'guitar army'. Medlocke and Thomasson would also become major contributors to the band's songwriting along with Rossington and Van Zant. The first album with this new lineup, released in 1997, was entitled Twenty. The band released another album, Edge of Forever in 1999. By that time, drummer Owen Hale had left the band, and the drums on the album were played by session drummer Kenny Aronoff. Michael Cartellone became the band's permanent drummer on the subsequent tour. Despite the growing number of post-reunion albums that the band had released up to this time, setlists showed that the band was playing mostly 1970s-era material in concert. The band released a Christmas album, entitled Christmas Time Again in 2000. Leon Wilkeson, Skynyrd's bassist since 1972, was found dead in his hotel room on July 27, 2001. His death was found to be due to emphysema and chronic liver disease. He was replaced in 2001 by Ean Evans. The first album to feature Evans was Vicious Cycle, released in 2003. This album had improved sales over the other post-reunion albums, and had a minor hit single in the song "Red, White and Blue".
The band also released Thyrty, a double collection album spanning songs from the original lineup to the present, as well as a live DVD of their Vicious Cycle Tour and, on June 22, 2004, the album Lynyrd Skynyrd Lyve: The Vicious Cycle Tour. Mark "Sparky" Matejka, formerly of the country music band Hot Apple Pie, joined Lynyrd Skynyrd in 2006 as Thomasson's replacement. On November 2, 2007, the band performed for a crowd of 50,000 people at the University of Florida's Gator Growl student-run pep rally in Ben Hill Griffin Stadium ("The Swamp" football stadium). This was the largest crowd that Lynyrd Skynyrd had played to in the U.S., until the July 2008 Bama Jam in Enterprise, Alabama where more than 111,000 people attended. On January 28, 2009, keyboardist Billy Powell died of a suspected heart attack at age 56 at his home near Jacksonville, Florida. No autopsy was carried out. He was replaced by Peter Keys. On March 17, 2009, it was announced that Skynyrd had signed a worldwide deal with Roadrunner Records, in association with their label, Loud & Proud Records, and released their new album God & Guns on September 29 of that year. They toured Europe and the U.S. in 2009 with Keys on keyboards and Robert Kearns of the Bottle Rockets on bass; bassist Ean Evans died of cancer at age 48 on May 6, 2009. Scottish rock band Gun performed as special guests for the UK leg of Skynyrd's tour in 2010. In addition to the tour, Skynyrd appeared at the Sean Hannity Freedom Concert series in late 2010. Hannity had been actively promoting the God & Guns album, frequently playing portions of the track "That Ain't My America" on his radio show. The tour was titled "Rebels and Bandoleros". The band continued to tour throughout 2011, playing alongside ZZ Top and the Doobie Brothers. On May 2, 2012, the band announced the impending release of a new studio album, Last of a Dyin' Breed, along with a North American and European tour.
On August 21, 2012, Last of a Dyin' Breed was released. In celebration, the band did four autograph signings throughout the southeast. Lynyrd Skynyrd had used the Confederate flag since the 1970s and drew repeated criticism for it. While promoting the album on CNN on September 9, 2012, members of the band talked about its discontinued use of Confederate imagery. In September 2012, the band briefly did not display the Confederate flag, which had for years been a part of their stage show, because they did not want to be associated with racists that adopted the flag. However, after protests from fans, they reversed this decision, citing it as part of their Southern American heritage and states' rights symbolism. Original drummer Bob Burns died aged 64 on April 3, 2015; his car crashed into a tree while he was driving alone near his home in Cartersville, Georgia. From 2015 through 2017, the band had periods of being sidelined or having to cancel shows due to health problems suffered by founding member Gary Rossington. Former member Ed King, who had been battling cancer, died in his Nashville, Tennessee home on August 22, 2018 at 68 years of age. Farewell tour and upcoming fifteenth album (2018–present) On January 25, 2018, Lynyrd Skynyrd announced their Last of the Street Survivors Farewell Tour, which started on May 4, 2018. Supporting acts included Kid Rock, Hank Williams Jr., Bad Company, the Charlie Daniels Band, the Marshall Tucker Band, .38 Special, Cheap Trick, Blackberry Smoke, the Randy Bachman Band, Blackfoot, Massive Wagons, and Status Quo. Concerts were usually on Fridays and Saturdays. On January 8, 2020, Rossington stated in an interview that while they would no longer be touring, they would continue to play occasional live shows. On March 19, 2019, Johnny Van Zant announced that the band intended to go into the studio to record one last album after completing the tour with several songs ready or "in the can".
They appeared at the Kaaboo Texas festival on May 11, 2019. Lynyrd Skynyrd was among hundreds of recording artists whose original master recordings were believed to have been destroyed in the 2008 Universal fire. Though it is not known with certainty which, if any, of the band's master recordings were lost in the blaze, Lynyrd Skynyrd was among the artists listed in an internal Universal Music Group document listing the artists whose master recordings the company believed had been lost and subsequently spent tens of millions of dollars trying to replace. Recognition Honors In 2004, Rolling Stone magazine ranked the group No. 95 on their list of the "100 Greatest Artists of All Time". On November 28, 2005, the Rock and Roll Hall of Fame announced that Lynyrd Skynyrd would be inducted alongside Black Sabbath, Blondie, Miles Davis, and the Sex Pistols. They were inducted at the Waldorf Astoria Hotel in Manhattan on March 13, 2006 during the Hall's 21st annual induction ceremony. The inductees included Ronnie Van Zant, Allen Collins, Gary Rossington, Ed King, Steve Gaines, Billy Powell, Leon Wilkeson, Bob Burns, and Artimus Pyle. Tributes In 2010, another country tribute album was produced, primarily by Jay Joyce, titled Sweet Home Alabama – The Country Music Tribute to Lynyrd Skynyrd. This album features […] local concerts, and opening for several national acts. Pat Armstrong, a Jacksonville native and partner in Macon, Georgia-based Hustlers Inc. with Phil Walden's younger brother, Alan Walden, became one of the band's managers. Armstrong left Hustlers shortly thereafter to start his own agency. Walden stayed with the band until 1974, when management was turned over to Peter Rudge. The band continued to perform throughout the South in the early 1970s, further developing their hard-driving blues rock sound and image, and experimenting with recording their sound in a studio.
Skynyrd crafted this distinctively "southern" sound through a creative blend of country, blues, and a slight British rock influence. During this time, the band experienced some lineup changes for the first time. Junstrom left and was briefly replaced by Greg T. Walker on bass. At that time, Rickey Medlocke joined as a second drummer and occasional second vocalist to help fortify Burns' sound on the drums. Medlocke grew up with the founding members of Lynyrd Skynyrd and his grandfather Shorty Medlocke was an influence in the writing of "The Ballad of Curtis Loew". Some versions of the band's history state Burns briefly left the band during this time, although other versions state that Burns played with the band continuously through 1974. Peak (1973–1977) In 1972, the band (then comprising Van Zant, Collins, Rossington, Burns, Wilkeson, and Powell) was discovered by musician, songwriter, and producer Al Kooper of Blood, Sweat & Tears, who had attended one of their shows at Funocchio's in Atlanta. Kooper signed them to his Sounds of the South label, which was to be distributed and supported by MCA Records, and produced their first album. Wilkeson, citing nervousness about fame, temporarily left the band during the early recording sessions, playing on only two tracks. He rejoined the band shortly after the album's release at Van Zant's invitation and is pictured on the album cover. To replace him, Strawberry Alarm Clock guitarist Ed King joined the band and played bass on the album (the only part that Wilkeson had not already written being the solo section in "Simple Man"), and also contributed to the songwriting and did some guitar work on the album. After Wilkeson rejoined, King stayed in the band and switched solely to guitar, allowing the band to replicate its three-guitar studio mix in live performances. The band released their debut album (Pronounced 'Lĕh-'nérd 'Skin-'nérd) on August 13, 1973. 
It sold over one million copies and was awarded a gold disc by the RIAA. The album featured the hit song "Free Bird", which received national airplay, eventually reaching No. 19 on the Billboard Hot 100 chart. Lynyrd Skynyrd's fan base continued to grow rapidly throughout 1973, largely due to their opening slot on the Who's Quadrophenia tour in the United States. Their 1974 follow-up album, Second Helping, featuring King, Collins and Rossington all collaborating with Van Zant on the songwriting, cemented the band's breakthrough. Its single "Sweet Home Alabama", a response to Neil Young's "Southern Man", reached No. 8 on the charts that August. (Young and Van Zant were not rivals, but fans of each other's music and good friends; Young wrote the song "Powderfinger" for the band, but they never recorded it.) During their peak years, most of their records sold over one million copies, but "Sweet Home Alabama" was the only single to crack the top ten. By 1975, personal issues began to take their toll on the band. In January, drummer Burns left the band after suffering a mental breakdown during a European tour and was replaced by Kentucky native and former US Marine Artimus Pyle. The band's third album, Nuthin' Fancy, was recorded in 17 days. Unhappy with the band's lack of preparation for the album's recording, Kooper and the band parted ways by mutual agreement after the tracking was completed, with Kooper mixing the album while the band left for the tour that had precipitated the constricted recording schedule. Though the album fared well, it ultimately had lower sales than its predecessors. Midway through the Nuthin' Fancy tour, guitarist Ed King abruptly left the band after a falling out with Van Zant. Van Zant and King's guitar roadie were arrested the night prior and spent the night in jail.
With his guitar roadie unavailable, King played that night's show with old strings that broke and caused his performance to be substandard, and Van Zant subsequently belittled him in front of his bandmates. King quit and returned home to Los Angeles, believing Van Zant had been responsible for his guitar roadie being in jail in the first place. Collins and Rossington both had serious car accidents over Labor Day weekend in 1976, which slowed the recording of the follow-up album and forced the band to cancel some concert dates. Rossington's accident inspired the ominous Van Zant/Collins composition "That Smell" – a cautionary tale about drug abuse that was clearly aimed towards him and at least one other band member. Rossington has admitted repeatedly that he was the "Prince Charming" of the song who crashed his car into an oak tree while drunk and stoned on Quaaludes. With the birth of his daughter Melody in 1976, Van Zant was making a serious attempt to clean up his act and curtail the cycle of boozed-up brawling that was part of Skynyrd's reputation. The Street Survivors album of 1977 turned out to be a showcase for guitarist/vocalist Steve Gaines, who had joined the band just a year earlier and was making his studio debut with them. Publicly and privately, Ronnie Van Zant marveled at the multiple talents of Skynyrd's newest member, claiming that the band would "all be in his shadow one day". Gaines' contributions included his co-lead vocal with Van Zant on the co-written "You Got That Right" and the rousing guitar boogie "I Know a Little", which he had written before he joined Skynyrd. So confident was Skynyrd's leader of Gaines' abilities that the album (and some concerts) featured Gaines delivering his self-penned bluesy "Ain't No Good Life" – the only song in the pre-crash Skynyrd catalog to feature a lead vocalist other than Ronnie Van Zant. The album also included the hit singles "What's Your Name" and "That Smell". 
The band was poised for their biggest tour yet, with shows always highlighted by the iconic rock anthem "Free Bird". Plane crash (1977) Following a performance at the Greenville Memorial Auditorium in Greenville, South Carolina, on October 20, 1977, the band boarded a chartered Convair CV-240 bound for Baton Rouge, Louisiana, where they were scheduled to appear at LSU the following night. After running out of fuel, the pilots
searching, variants are spelled completely, and listed in most likely chronology. Superscripts indicate: Latinized form of the Greek-derived name. Latinized form of the Asian-derived name via Greek. Altered Latinized form of the Greek-derived name. Cities and towns in Austria Cities and towns in Belgium Cities and towns in the Czech Republic Cities and towns in Denmark Cities and towns in Estonia Cities and towns in Finland Cities and towns in France Cities and towns in Germany Cities and towns in Hungary Cities and towns in Ireland Cities and towns in Latvia Cities and towns in Moldova Cities and towns in Monaco Cities and towns in the Netherlands Cities and towns in Norway Cities and towns in Poland Cities and towns in Russia Cities and towns in San Marino Cities and towns in Slovakia Cities and towns in Slovenia Cities and towns in Sweden Cities and towns in Switzerland Cities and towns in Ukraine See also List of Latin place names in Iberia List of Latin place names in the Balkans List of Latin place names used as specific names References Sources In order of likely publication: PNH: Pliny (Gaius Plinius
a famous dispute erupted between Ferrari and his contemporary Niccolò Fontana Tartaglia, involving the solution to cubic equations. Widespread stories that Tartaglia devoted the rest of his life to ruining Ferrari's teacher and erstwhile master Cardano, however, appear to be fabricated. Mathematical historians now credit both Cardano and Tartaglia with the formula to solve cubic equations, referring to it as the "Cardano–Tartaglia formula".
him mathematics. Ferrari aided Cardano on his solutions for quadratic equations and cubic equations, and was mainly responsible for the solution of quartic equations that Cardano published. While still in his teens, Ferrari was able to obtain a prestigious teaching post in Rome after Cardano resigned from it and recommended him. Ferrari retired young, at 42 years old and wealthy. He then moved back to his home town of Bologna, where he lived with his widowed sister Maddalena and took up a professorship of mathematics at the University of Bologna in 1565. Shortly thereafter, he died of white arsenic poisoning, according to a
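In modern notation, the Cardano–Tartaglia formula solves the depressed cubic; the general cubic is first reduced to that shape by substituting x = t − b/(3a). The following is the standard textbook statement, not the 16th-century rhetorical form in which the method was actually communicated:

```latex
t^{3} + pt + q = 0
\quad\Longrightarrow\quad
t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}}
  + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}}
```

For example, t³ − 6t − 9 = 0 gives q²/4 + p³/27 = 81/4 − 8 = 49/4, so t = ∛(9/2 + 7/2) + ∛(9/2 − 7/2) = 2 + 1 = 3.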
but it can also be seen in a number of genetic disorders. Though the condition is incurable and progressive, a number of treatments can improve symptoms. Tissues with lymphedema are at high risk of infection because the lymphatic system has been compromised. While there is no cure, treatment may improve outcomes. This commonly includes compression therapy, good skin care, exercise, and manual lymphatic drainage (MLD), which together are known as combined decongestive therapy. Diuretics are not useful. Surgery is generally only used in those who are not improved by other measures. Signs and symptoms The most common manifestation of lymphedema is soft tissue swelling (edema). As the disorder progresses, worsening edema and skin changes, including discoloration, verrucous (wart-like) hyperplasia, hyperkeratosis, papillomatosis, dermal thickening and ulcers, may be seen. Additionally, there is an increased risk of infection of the skin, known as cellulitis. Complications When the lymphatic impairment becomes so great that the lymph fluid exceeds the lymphatic system's ability to transport it, an abnormal amount of protein-rich fluid collects in the tissues. Left untreated, this stagnant, protein-rich fluid causes tissue channels to increase in size and number, reducing oxygen availability. This interferes with wound healing and provides a rich culture medium for bacterial growth that can result in infections, cellulitis, lymphangitis, lymphadenitis and, in severe cases, skin ulcers. It is vital for lymphedema patients to be aware of the symptoms of infection and to seek immediate treatment, since recurrent infections or cellulitis, in addition to their inherent danger, further damage the lymphatic system and set up a vicious circle. In rare cases, lymphedema can lead to a form of cancer called lymphangiosarcoma, although the mechanism of carcinogenesis is not understood. Lymphedema-associated lymphangiosarcoma is called Stewart–Treves syndrome.
Lymphangiosarcoma most frequently occurs in cases of long-standing lymphedema. The incidence of angiosarcoma is estimated to be 0.45% in patients living 5 years after radical mastectomy. Lymphedema is also associated with a low-grade form of cancer called retiform hemangioendothelioma (a low-grade angiosarcoma). Lymphedema can be disfiguring, and may result in a poor body image, which can cause psychological distress. Complications of lymphedema can cause difficulties in activities of daily living. Causes Lymphedema may be inherited (primary) or caused by injury to the lymphatic vessels (secondary). Lymph node damage Secondary lymphedema is most frequently seen after lymph node dissection, surgery and/or radiation therapy, in which the lymphatic system is damaged during the treatment of cancer, most notably breast cancer. In many patients with cancer, this condition does not develop until months or even years after therapy has concluded. Lymphedema may also be associated with accidents or certain diseases or problems that may inhibit the lymphatic system from functioning properly. In tropical areas of the world, a common cause of secondary lymphedema is filariasis, a parasitic infection. It can also be caused by damage to the lymphatic system from infections such as cellulitis. Primary lymphedema may be congenital or arise sporadically. Multiple syndromes are associated with primary lymphedema, including Turner syndrome, Milroy's disease, and Klippel–Trénaunay–Weber syndrome. It is generally thought to occur as a result of absent or malformed lymph nodes and/or lymphatic channels. Lymphedema may be present at birth, develop at the onset of puberty (praecox), or not become apparent for many years into adulthood (tarda). In men, lower-limb primary lymphedema is most common, occurring in one or both legs. Some cases of lymphedema may be associated with other vascular abnormalities. Secondary lymphedema affects both men and women.
In women, it is most prevalent in the upper limbs after breast cancer surgery, in particular after axillary lymph node dissection, occurring in the arm on the side of the body on which the surgery was performed. Breast and trunk lymphedema can also occur but may go unrecognised, as there is swelling in the area after surgery and its symptoms (peau d'orange and/or an inverted nipple) can be confused with post-surgery fat necrosis. In Western countries, secondary lymphedema is most commonly due to cancer treatment. Between 38 and 89% of breast cancer patients suffer from lymphedema due to axillary lymph node dissection and/or radiation. Unilateral lymphedema occurs in up to 41% of patients after gynecologic cancer. For men, a 5–66% incidence of lymphedema has been reported, depending on whether staging or radical removal of lymph glands was done in addition to radiotherapy. Head and neck lymphedema can be caused by surgery or radiation therapy for tongue or throat cancer. It may also occur in the lower limbs or groin after surgery for colon, ovarian or uterine cancer in which removal of lymph nodes or radiation therapy is required. Surgery or treatment for prostate, colon and testicular cancers may result in secondary lymphedema, particularly when lymph nodes have been removed or damaged. The onset of secondary lymphedema in patients who have had cancer surgery has also been linked to aircraft flight (likely due to decreased cabin pressure or relative immobility). For cancer survivors, therefore, wearing a prescribed and properly fitted compression garment may help decrease swelling during air travel. Some cases of lower-limb lymphedema have been associated with the use of tamoxifen, due to the blood clots and deep vein thrombosis (DVT) that can be associated with this medication. Resolution of the blood clots or DVT is needed before lymphedema treatment can be initiated. Infectious causes include lymphatic filariasis.
At birth Hereditary lymphedema is a primary lymphedema – swelling that results from abnormalities in the lymphatic system that are present from birth. Swelling may be present in a single affected limb, several limbs, the genitalia, or the face. It is sometimes diagnosed prenatally by a nuchal scan or postnatally by lymphoscintigraphy. The most common form is Meige disease, which usually presents at puberty. Another form of hereditary lymphedema is Milroy's disease, caused by mutations in the VEGFR3 gene. Hereditary lymphedema is frequently syndromic and is associated with Turner syndrome, lymphedema–distichiasis syndrome, yellow nail syndrome, and Klippel–Trénaunay–Weber syndrome. One defined genetic cause of hereditary lymphedema is GATA2 deficiency. This deficiency is a grouping of several disorders caused by a common defect, viz., familial or sporadic inactivating mutations in one of the two parental GATA2 genes. These autosomal dominant mutations cause a reduction, i.e. a haploinsufficiency, in the cellular levels of the gene's product, GATA2. The GATA2 protein is a transcription factor critical for the embryonic development, maintenance, and functionality of blood-forming, lymphatic-forming, and other tissue-forming stem cells. As a consequence of these mutations, cellular levels of GATA2 are deficient and individuals develop hematological, immunological, lymphatic, and/or other disorders over time. GATA2 deficiency-induced defects in the lymphatic vessels and valves underlie the development of lymphedema, which is primarily located in the lower extremities but may also occur in other places such as the face or testes (i.e. hydrocele). This form of the deficiency, when coupled with sensorineural hearing loss, which may also be due to faulty development of the lymphatic system, is sometimes termed Emberger syndrome.
Primary lymphedema has a quoted incidence of approximately 1–3 out of every 10,000 births, with a particular female preponderance to male ratio
or right lymphatic duct, which drain into the blood circulation. Diagnosis Diagnosis is generally based on signs and symptoms, with testing used to rule out other potential causes. An accurate diagnosis and staging may help with management. A swollen limb can result from different conditions that require different treatments. Diagnosis of lymphedema is currently based on history, physical exam, and limb measurements. Imaging studies such as lymphoscintigraphy and indocyanine green lymphography are only required when surgery is being considered. However, the ideal method for lymphedema staging to guide the most appropriate treatment is controversial, because several different protocols have been proposed. Lymphedema can occur in both the upper and lower extremities, and in some cases the head and neck. Assessment of the extremities begins with a visual inspection. Color, presence of hair, visible veins, size and any sores or ulcerations are noted. Lack of hair may indicate an arterial circulation problem. If swelling is present, the circumference of the extremity is measured for future reference. In early stages of lymphedema, elevating the limb may reduce or eliminate the swelling. Palpation of the wrist or ankle can determine the degree of swelling; assessment includes a check of the pulses. The axillary or inguinal nodes may be enlarged due to the swelling. Enlargement of the nodes lasting more than three weeks may indicate infection or other illnesses, such as sequelae of breast cancer surgery, requiring further medical attention. Diagnosis or early detection of lymphedema is difficult. The first signs may be subjective observations such as a feeling of heaviness in the affected extremity.
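Circumference measurements of the kind described here are often converted into limb-volume estimates by modelling each segment between two adjacent measurement sites as a conical frustum (truncated cone). The sketch below illustrates the arithmetic only; the 4 cm measurement interval, the sample values, and all function names are hypothetical, not taken from any specific clinical protocol:

```python
import math

def segment_volume(c1, c2, h=4.0):
    """Volume (ml) of one limb segment modelled as a conical frustum.

    c1, c2 are the circumferences (cm) at the segment's two ends;
    h is the distance (cm) between the measurement sites.
    With radii r = C / (2*pi), the frustum volume
    (pi*h/3) * (r1^2 + r1*r2 + r2^2) simplifies to the form below.
    """
    return h * (c1**2 + c1 * c2 + c2**2) / (12 * math.pi)

def limb_volume(circumferences, h=4.0):
    """Total volume (ml) of a limb measured every h cm."""
    return sum(segment_volume(a, b, h)
               for a, b in zip(circumferences, circumferences[1:]))

# Hypothetical wrist-to-elbow measurements (cm), taken every 4 cm.
affected = [17.5, 19.0, 21.5, 24.0, 26.5]
unaffected = [16.0, 17.5, 19.5, 21.5, 23.5]

# The two quantities that common diagnostic thresholds refer to:
# inter-limb volume difference (compared against 200 ml) and the
# largest single-site circumference difference (against 4 cm).
diff_ml = limb_volume(affected) - limb_volume(unaffected)
max_circ_diff = max(abs(a - b) for a, b in zip(affected, unaffected))
print(f"volume difference: {diff_ml:.0f} ml")
print(f"largest circumference difference: {max_circ_diff:.1f} cm")
```

Since 1 cm³ equals 1 ml, circumferences in centimetres yield volumes directly comparable with a millilitre threshold.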
These may be symptomatic of early-stage lymphedema, where accumulation of lymph is mild and not detectable by changes in volume or circumference. As lymphedema progresses, definitive diagnosis is commonly based upon an objective measurement of differences between the affected or at-risk limb and the opposite unaffected limb, e.g. in volume or circumference. No generally accepted criterion is definitively diagnostic, although a volume difference of 200 ml between limbs or a 4-cm difference (at a single measurement site or at set intervals along the limb) is often used. Bioimpedance measurement (which measures the amount of fluid in a limb) offers greater sensitivity than existing methods. Chronic venous stasis changes can mimic early lymphedema, but the changes in venous stasis are more often bilateral and symmetric. Lipedema can also mimic lymphedema; however, lipedema characteristically spares the feet, beginning abruptly at the medial malleoli (ankle level). As part of the initial work-up before diagnosing lymphedema, it may be necessary to exclude other potential causes of lower extremity swelling, such as kidney failure, hypoalbuminemia, congestive heart failure, protein-losing nephropathy, pulmonary hypertension, obesity, pregnancy and drug-induced edema. Classification According to the Fifth WHO Expert Committee on Filariasis, the most common method of classification of lymphedema is as follows (the same classification method can be used for both primary and secondary lymphedema). The International Society of Lymphology (ISL) Staging System is based solely on subjective symptoms, making it prone to substantial observer bias. Imaging modalities have been suggested as useful adjuncts to ISL staging to clarify the diagnosis. The lymphedema expert Dr. Ming-Huei Cheng developed Cheng's Lymphedema Grading, a tool to assess the severity of extremity lymphedema based on objective limb measurements, providing appropriate options for management. I.
Grading Grade 1: Spontaneously reversible on elevation. Mostly pitting edema. Grade 2: Not spontaneously reversible on elevation. Mostly non-pitting edema. Grade 3: Gross increase in volume and circumference of Grade 2 lymphedema, with eight stages of severity given below based on clinical assessments. II. Staging As described by the Fifth WHO Expert Committee on Filariasis, and endorsed by the American Society of Lymphology, the staging system helps to identify the severity of lymphedema. With the assistance of medical imaging apparatus, such as MRI or CT, staging can be established by the physician, and therapeutic or medical interventions may be applied: Stage 0: The lymphatic vessels have sustained some damage that is not yet apparent. Transport capacity is sufficient for the amount of lymph being removed. Lymphedema is not present. Stage 1: Swelling increases during the day and disappears overnight as the patient lies flat in bed. Tissue is still at the pitting stage: when pressed by the fingertips, the affected area indents and reverses with elevation. Usually, upon waking in the morning, the limb or affected area is normal or almost normal in size. Treatment is not necessarily required at this point. Stage 2: Swelling is not reversible overnight, and does not disappear without proper management. The tissue now has a spongy consistency and is considered non-pitting: when pressed by the fingertips, the affected area bounces back without indentation. Fibrosis found in Stage 2 lymphedema marks the beginning of the hardening of the limbs and their increasing size. Stage 3: Swelling is irreversible and usually the limb(s) or affected area becomes increasingly large. The tissue is hard (fibrotic) and unresponsive; some patients consider undergoing reconstructive surgery, called "debulking". This remains controversial, however, since the risks may outweigh the benefits, and the further damage done to the lymphatic system may in fact make the lymphedema worse.
Stage 4: The size and circumference of the affected limb(s) become noticeably large. Bumps, lumps, or protrusions (also called knobs) on the skin begin to appear. Stage 5: The affected limb(s) become grossly large; one or more deep skin folds are prevalent among patients in this stage. Stage 6: Small elongated or rounded knobs cluster together, giving mossy-like shapes on the limb. Mobility of the patient becomes increasingly difficult. Stage 7: The person becomes handicapped, and is unable to independently perform daily routine activities such as walking, bathing and cooking. Assistance from the family and health care system is needed. Grades Lymphedema can also be categorized by its severity (usually referenced to a healthy extremity): Grade 1 (mild edema): Involves the distal parts, such as a forearm and hand or a lower leg and foot. The difference in circumference is less than 4 cm, and other tissue changes are not yet present. Grade 2 (moderate edema): Involves an entire limb or corresponding quadrant of the trunk. Difference in circumference is 4–6 cm. Tissue changes, such as pitting, are apparent. The patient may experience erysipelas. Grade 3a (severe edema): Lymphedema is present in one limb and its associated trunk quadrant. Circumferential difference is greater than 6 cm. Significant skin alterations, such as cornification or keratosis, cysts and/or fistulae, are present. Additionally, the patient may experience repeated attacks of erysipelas. Grade 3b (massive edema): The same symptoms as grade 3a, except that two or more extremities are affected. Grade 4 (gigantic edema): In this stage of
Later Samuel Ampzing (with the help of Petrus Scriverius) repeated the story in Lavre-Kranz Voor Lavrens Koster Van Haerlem, Eerste Vinder vande Boek-Druckerye (1628) with illustrations of the invention. According to Junius, sometime in the 1420s, Coster was in the Haarlemmerhout carving letters from bark for the amusement of his grandchildren, and observed that the letters left impressions on the sand. He proceeded to invent a new type of ink that did not run, and he began a printing company based on his invention with a primitive typesetting arrangement using movable type. Since the Haarlemmerhout was burned during a siege by the Kennemers in 1426 during the Hook and Cod wars, this must have been early in the 1420s. Using wooden letters at first, he later used lead and tin movable type. His company prospered and grew. He is said to have printed several books, including Speculum Humanae Salvationis, with several assistants, including the letter cutter Johann Fust; it was this Fust (often spelled Faust) who, when Laurens was nearing death, broke his promise of secrecy, stole his presses and type, and took them to Mainz, where he started his own printing company. Story by Ulrich Zell There is support for the claim that Coster might be the inventor. In the anonymous Kölner Chronik of 1499, Ulrich Zell, a printing assistant from Cologne, who was then between 60 and 69 years old, claimed that printing had started in Mainz. He based this statement on knowledge that Holland used to print Latin grammar texts (Donatus). Neither Coster nor Haarlem is mentioned in that chronicle. If true, this points to Johann Gutenberg about a decade after Coster's death. However, the first securely dated book by Dutch printers is from 1471, long after Gutenberg. Either way, Coster is something of a local "hero" in Haarlem, and apart from a statue on the Grote Markt his name can be found in many places in the city.
Earliest known Haarlem printer Between 1483 and 1486, Jacob Bellaert worked in Haarlem. His books were known for their artistic
who also served in the admiralty of Amsterdam, Pieter van der Camer (1666-1747), who commissioned his own commemorative medal to celebrate 50 years in the service of the vroedschap of Haarlem in 1743, Jan van Dyck, and Cornelis Ascanius van Sypesteyn (1694-1744), who himself was a collector of medals and who lived at Brederode. This medal set a historical precedent in Haarlem for commemorative medals; Sypesteyn's son Cornelis Ascanius van Sypesteyn (1723-1788) later became the founding director of the learned society Hollandsche Maatschappij der Wetenschappen and its offshoot, the "Oeconomische Tak", and he hired Holtzhey's son Johann Georg to commission prize medals for both societies. 400th anniversary In 1823 Haarlem celebrated the 400th anniversary of Coster's invention with a monument in the Haarlemmerhout. The monument is decorated with Latin inscriptions and a memorial text in Dutch, with symbolic "A" decorations at the top. The celebration was organized by Abraham de Vries, a Coster fan who became Haarlem's first librarian in 1821 and who received a commission from the city fathers to acquire Costeriana, or material relating to Coster's claim to fame. De Vries was supported by the professor and city council member David Jacob van Lennep, who believed the legend and sponsored De Vries by obtaining funds from the city council for the monument. In the period after the Flanders Campaign which led to the French occupation of the Netherlands from 1794 to 1815, Haarlem's economy was severely depressed and the city council sought a local hero. In 1817, Van Lennep (who was in the city council at the time) had also placed the monument De Naald (Heemstede) at his own home in nearby Heemstede. The Germans were insulted by the anniversary celebration and held a similar anniversary celebration the next year. Joh. Enschedé Behind the St.
Bavochurch stood the printing factory of Joh. Enschedé, which from 1737 to 1940 printed
of Cambridge) Range Rover '6×6' Fire Appliance (conversion by Carmichael and Sons of Worcester) for RAF airfield use 130 Defender ambulance 'Llama' prototypes for 101 replacement Models developed for the Australian Army Land Rover Perentie 4×4 and 6×6 Engines During the history of the Land Rover, many different engines have been fitted: The inlet-over-exhaust petrol engines ("semi side-valve"), in both four- and six-cylinder variants, which were used for the very first Land Rovers in 1948, and which had their origins in pre-war Rover cars. Displacement of the first models was 1,600 cc. The four-cylinder overhead-valve engines, both petrol and diesel, which first appeared (in diesel form) in 1957, near the end of Series One production, and evolved over the years to the 300 TDi turbodiesel, which remains in production today for some overseas markets. The Buick-sourced all aluminium Rover V8 engine. 1,997 cc Petrol, inlet-over-exhaust: Series I engine, carried over for the first few months of Series II production. 2,052 cc Diesel, overhead-valve: Land Rover's first diesel engine, and one of the first small high-speed diesels produced in the UK. It appeared in 1957, and was used in Series II production until 1961. It looks almost identical to the later 2,286 cc engine, but has many internal differences. It produced . 2,286 cc Petrol, overhead-valve, three-bearing crank: 2,286 cc Diesel, overhead-valve, three-bearing crank: Appeared in 1961 alongside the redesigned 2,286 cc petrol engine at the start of Series IIA production, and shared its cylinder block and some other components. It produced . 2,625 cc Petrol, inlet-over-exhaust: Borrowed from the Rover saloon range, in response to demands from mid-1960s Land Rover users for more power and torque. 2,286 cc petrol/diesel, overhead-valve type 11J: five-bearing crank: In 1980, Land Rover finally did something about the crank failures which had plagued its four-cylinder engines for 22 years.
These engines lasted beyond the end of Series III production and into the first couple of years of the new Ninety and One Ten ranges. 3,258 cc V8 Petrol: The ex-Buick all alloy V8 engine appeared in the Range Rover right from the start of production in 1970, but did not make its way into the company's utility vehicles until 1979. 2,495 cc petrol, overhead valve: The final development of Land Rover's ohv petrol 'four', with hardened valve seats which allow running on unleaded (or LPG). 2,495 cc diesel, overhead valve, type 12J: Land Rover reworked the old 'two and a quarter' diesel for the 1980s. The injection pump was driven off a toothed belt at the front of the engine (together with the camshaft), a change compared with the older diesels. 2,495 cc turbodiesel, overhead valve, type 19J 2,495 cc turbodiesel, overhead valve, 200TDi and 300TDi: Used in the Defender and Discovery from 1990. The cylinder block was similar to the previous engine, although strengthened, but the cylinder head was all-new and a direct injection fuel system was used. 2,495 cc turbodiesel, five-cylinder, TD5: An all-new engine for the second-generation Discovery and the Defender, featuring electronic control of the fuel injection system, 'drive by wire' throttle, and other refinements. The original Freelander models were available with various Rover K-series engines. At the beginning of 2015, the company started using the all-new Ingenium engine family, replacing the Ford-sourced engines. Most Land Rovers in production are powered by Ford engines. Under the terms of the acquisition, Tata has the right to buy engines from Ford until 2019. Electric vehicles Integrated Electric Rear Axle Drive (ERAD) technology, dubbed e-terrain technology, will allow the vehicle to move off without starting the engine, as well as supplying extra power over tough terrain.
Land Rover's Diesel ERAD Hybrid was developed as part of a multimillion-pound project supported by the UK Government's Energy Saving Trust, under the low carbon research and development programme. The ERAD programme is one of a broad range of sustainability-focused engineering programmes that Land Rover is pursuing, brought together by the company under the collective name "e TERRAIN Technologies". At the 2008 London Motor Show, Land Rover presented its new ERAD diesel-electric hybrid in a pair of Freelander 2 (LR2) prototypes. The hybrid system is designed as a scalable and modular system that could be applied across a variety of Land Rover models and powertrains. Land Rover unveiled the LRX hybrid concept, intended for production, at the 2008 North American International Auto Show in Detroit. An ERAD will enable the car to run on electric power at speeds below . In September 2011, the Range Rover Evoque was launched; though it was based on the LRX hybrid concept presented at the 2008 North American International Auto Show, it did not include the ERAD system of the original concept. In February 2013, Land Rover unveiled at the 83rd Geneva Motor Show an All-Terrain Electric Defender that produces zero emissions. The electric vehicle was developed for research purposes following successful trials of the Defender-based electric vehicle, Leopard 1. The vehicle is capable of producing 70 kW and 330 Nm of torque, and has a range of 80 kilometres, or up to eight hours in low-speed off-road use, before recharging. Abilities Power take-off (PTO) was integral to the Land Rover concept from 1948, enabling farm machinery and many other items to be run with the vehicle stationary. Maurice Wilks' original instruction was "...to have power take-offs everywhere!"
The 1949 report by the British National Institute of Agricultural Engineering and Scottish Machinery Testing Station contained this description: "the power take-off is driven through a Hardy Spicer propeller shaft from the main gearbox output and two interchangeable pinions giving two ratios. The PTO gearbox casing is bolted to the rear chassis cross-member and a belt pulley driven from the PTO shaft through two bevel gears can be bolted to the PTO gearbox casing." PTOs remained regular options on Series I, II and III Land Rovers up to the demise of the Series Land Rover in 1985. An agricultural PTO on a Defender is possible as a special order. Land Rovers (the Series/Defender models) are available in a variety of body styles, from a simple canvas-topped pick-up truck to a twelve-seat fully trimmed station wagon. Both Land Rover and out-of-house contractors have offered conversions and adaptations to the basic vehicle, such as fire engines, excavators, 'cherry picker' hydraulic platforms, ambulances, snowploughs, and six-wheel-drive versions, as well as one-off special builds including amphibious Land Rovers and vehicles fitted with tracks instead of wheels. Military use Various Land Rover models have been used in a military capacity, most notably by the British Army and Australian Army. Modifications may include military "blackout" lights, heavy-duty suspension, uprated brakes, 24 volt electrics, convoy lights, electronic suppression of the ignition system, blackout curtains and mounts for special equipment and small arms. Dedicated military models have been produced such as the 101 Forward Control and the air-portable 1/2 ton Lightweight. Military uses include light utility vehicle; communications platform; weapon platform for recoilless rifles, Anti-tank (e.g. TOW or M40 recoilless rifle) / Surface-to-Air Guided Weapons or machine guns; ambulances and workshops. The Discovery has also been used in small numbers, mostly as liaison vehicles.
Two models that have been designed for military use from the ground up are the 101 Forward Control from the early 1970s and the Lightweight or Airportable from the late 1960s. The latter was intended to be transported under a helicopter. The Royal Air Force Mountain Rescue Service (RAFMRS) teams were early users in the late 1950s and early 1960s, and their convoys of Land Rovers and larger military trucks are a sight often seen in the mountain areas of the United Kingdom. Originally RAFMRS Land Rovers had blue bodies and bright yellow tops, to be better seen from above. In 1981, the colour scheme was changed to green with yellow stripes. More recently, vehicles have been painted white, and are issued with fittings similar to civilian UK Mountain Rescue teams. An adaptation of Land Rovers to military purposes is the "Pink Panther" models. Approximately 100 Series IIA models were adapted to reconnaissance use by British special operations forces, the SAS. For desert use they were often painted pink, hence the name. The vehicles were fitted with, among other gear, a sun compass, machine guns, larger fuel tanks and smoke dischargers. Similar adaptations were later made to Series IIIs and 90/110/Defenders. The Australian Army adapted the Land Rover Series 2 into the Long Range Patrol Vehicle for use by the Special Air Service Regiment and as an anti-tank "gunbuggy" fitted with an M40 recoilless rifle. The 75th Ranger Regiment of the United States Army also adapted twelve versions of the Land Rover that were officially designated the Ranger Special Operations Vehicle. Series and Defender models have also been armoured. The most widespread of these is the Shorts Shorland, built by Short Brothers of Belfast. The first of these were delivered in 1965 to the Royal Ulster Constabulary, the Northern Ireland police force. They were originally wheelbase models with an armoured body and a turret from the Ferret armoured car. By 1990, there had been more than 1,000 produced.
In the 1970s, a more conventional armoured Land Rover, the Hotspur, was built in Wales for the Royal Ulster Constabulary. The Land Rover Tangi was built by the Royal Ulster Constabulary's own vehicle engineering team during the 1990s. The British Army has used various armoured Land Rovers, first in Northern Ireland but also in more recent campaigns. They first added protective panels to Series General Service vehicles (the Vehicle Protection Kit (VPK)). Later they procured the Glover Webb APV and finally the Courtaulds (later NP Aerospace) Composite Armoured Vehicle, commonly known as Snatch. These were originally based on heavy-duty V8 110 chassis, but some have recently been re-mounted on new chassis from Otokar of Turkey and fitted with diesel engines and air-conditioning for Iraq. Although these now have more in common with the 'Wolf' (Defender XD) Land Rovers that many mistakenly confuse them with, the Snatch and the Wolf are different vehicles. The most radical conversion of a Land Rover for military purposes was the Centaur half-track. It was based on a Series III with a V8 engine and a shortened belt drive from the Alvis Scorpion light tank. A small number were manufactured, and they were used by Ghana, among others. The Land Rover is used by military forces throughout the world. The current generation of Land Rover used by the British Army, the Snatch 2, has an upgraded and strengthened chassis and suspension compared to civilian-specification vehicles. There is also the Land Rover WMIK (weapons mount installation kit) used by the British Army. The WMIK carries a driver, a raised gun (usually a Browning heavy machine gun or a grenade machine gun) used for ground support, and a GPMG (general-purpose machine gun) operated from next to the driver for vehicle protection.
Competitive use Highly modified Land Rovers have competed in the Dakar Rally and won the Macmillan 4x4 UK Challenge almost every year, as well as being the vehicle used for the Camel Trophy. Land Rover now has its own G4 Challenge. Driver training Land Rover Experience was established in 1990, and consists of a network of centres throughout the world, set up to help customers get the most out of their vehicles' on- and off-road capability. The flagship centres are Land Rover's bases at Solihull, Eastnor, Gaydon and Halewood. Courses offered include off-road driving, winching and trailer handling, along with a variety of corporate and individual 'Adventure Days'. The factory centres at Solihull and Halewood have manufacturing tours, while Gaydon has an engineering tour. Safety Model-by-model road accident statistics from the UK Department for Transport show that the Land Rover Defender is one of the safest cars on British roads, as measured by the chance of death in two-car injury accidents. The figures, which were based on data collected by police forces following accidents between 2000 and 2004 in Great Britain, showed that Defender drivers had a 1% chance of being killed or seriously injured and a 33% chance of sustaining any kind of injury. Other four-wheel-drive vehicles scored equally highly, and collectively these vehicles were much safer for their passengers than those in other classes such as passenger cars and MPVs. These figures reflect the fact that drivers of large mass vehicles are likely to be safer, often
Discovery Sport is the successor to the brand's Freelander model, which was Europe's best-selling 4x4 for five years in a row after its market introduction in 1997.
Models Historic Series I, II, IIA and III Freelander (sold in some markets as LR2) Current Defender Discovery Discovery Sport Range Rover Range Rover Sport Range Rover Velar Range Rover Evoque Concepts Range Stormer – Land Rover's first concept vehicle, unveiled at the 2004 North American International Auto Show; it later became the Range Rover Sport (Gritzinger, 2004). Land Rover LRX – Land Rover's second concept vehicle, first unveiled at the 2008 Detroit Auto Show. Originally a vehicle with ERAD technology, the production version did not include this. The car was launched in 2011 as the Range Rover Evoque, and was the first Range Rover-branded product to be offered with front-wheel drive and no low-ratio transfer box. Land Rover DC100 – Land Rover's third concept vehicle, first unveiled at the 2011 Frankfurt Auto Show, designed to be a replacement for the Land Rover Defender, though it is unlikely that the Defender's replacement will be exactly the same as the DC100 concept. Land Rover Discovery Vision Concept – Land Rover's fourth concept vehicle, first unveiled in 2014, designed to be a replacement for the Land Rover Discovery. This concept features a transparent bonnet, suicide doors and laser-assisted lamps (there is very little chance these will be included in any future production vehicles).
Military Models developed for the UK Ministry of Defence (MoD) include: 101 Forward Control – also known as the "Land Rover One Tonne FC" 1/2 ton Lightweight – airportable military short-wheelbase derived from the Series 2a Land Rover Wolf – an uprated military Defender Snatch Land Rover – Land Rover with composite armoured body in UK Armed Forces service 109 Series IIa and III ambulance (body by Marshalls of Cambridge) Range Rover '6×6' Fire Appliance (conversion by Carmichael and Sons of Worcester) for RAF airfield use 130 Defender ambulance 'Llama' prototypes for the 101 replacement Models developed for the Australian Army Land Rover Perentie 4×4 and 6×6 Engines During the history of the Land Rover many different engines have been fitted: The inlet-over-exhaust petrol engines ("semi side-valve"), in both four- and six-cylinder variants, which were used for the very first Land Rovers in 1948, and which had their origins in pre-war Rover cars. Displacement of the first models was 1,600 cc. The four-cylinder overhead-valve engines, both petrol and diesel, which first appeared (in diesel form) in 1957, near the end of Series One production, and evolved over the years to the 300 TDi turbodiesel, which remains in production today for some overseas markets. The Buick-sourced all-aluminium Rover V8 engine. 1,997 cc Petrol, inlet-over-exhaust: Series I engine, carried over for the first few months of Series II production. 2,052 cc Diesel, overhead-valve: Land Rover's first diesel engine, and one of the first small high-speed diesels produced in the UK. It appeared in 1957, and was used in Series II production until 1961. It looks almost identical to the later 2,286 cc engine, but has many internal differences. It produced . 2,286 cc Petrol, overhead-valve, three-bearing crank: 2,286 cc Diesel, overhead-valve, three-bearing crank: Appeared in 1961 alongside the redesigned 2,286 cc petrol engine at the start of Series IIA production, and shared its cylinder block and some other components.
It produced . 2,625 cc Petrol, inlet-over-exhaust: Borrowed from the Rover saloon range, in response to demands from mid-1960s Land Rover users for more power and torque. 2,286 cc petrol/diesel, overhead-valve, type 11J, five-bearing crank: In 1980, Land Rover finally did something about the crank failures which had plagued its four-cylinder engines for 22 years. These engines lasted beyond the end of Series III production and into the first couple of years of the new Ninety and One Ten ranges. 3,528 cc V8 Petrol: The ex-Buick all-alloy V8 engine appeared in the Range Rover right from the start of production in 1970, but did not make its way into the company's utility vehicles until 1979. 2,495 cc petrol, overhead valve: The final development of Land Rover's ohv petrol 'four', with hardened valve seats which allow running on unleaded petrol (or LPG). 2,495 cc diesel, overhead valve, type 12J: Land Rover reworked the old 'two and a quarter' diesel for the 1980s. The injection pump was driven off a toothed belt at the front of the engine (together with the camshaft), a change compared with the older diesels. 2,495 cc turbodiesel, overhead valve, type 19J 2,495 cc turbodiesel, overhead valve, 200TDi and 300TDi: Used in the Defender and Discovery from 1990. The cylinder block was similar to the previous engine's, although strengthened, but the cylinder head was all-new and a direct-injection fuel system was used. 2,495 cc turbodiesel, five-cylinder, TD5: An all-new engine for the second-generation Discovery and the Defender, featuring electronic control of the fuel injection system, 'drive by wire' throttle, and other refinements. The original Freelander models were available with various Rover K-series engines. At the beginning of 2015 Land Rover started to use the all-new Ingenium engine family to replace Ford-sourced engines. Most Land Rovers in production are powered by Ford engines. Under the terms of the acquisition, Tata has the right to buy engines from Ford until 2019.
Electric vehicles Integrated Electric Rear Axle Drive (ERAD) technology, dubbed e-terrain technology, allows the vehicle to move off without starting the engine, as well as supplying extra power over tough terrain. Land Rover's Diesel ERAD Hybrid was developed as part of a multimillion-pound project supported by the UK Government's Energy Saving Trust, under the low-carbon research and development programme. The ERAD programme is one of a broad range of sustainability-focused engineering programmes that Land Rover is pursuing, brought together by the company under the collective name "e TERRAIN Technologies". Land Rover presented its new ERAD diesel-electric hybrid at the 2008 London Motor Show in a pair of Freelander 2 (LR2) prototypes. The hybrid system is designed as a scalable and modular system that could be applied across a variety of Land Rover models and powertrains. Land Rover unveiled the LRX hybrid concept at the 2008 North American International Auto Show in Detroit, ahead of a planned production version. An ERAD will enable the car to run on electric power at speeds below . In September 2011, the Range Rover Evoque was launched; though it was based on the LRX hybrid concept presented at the 2008 North American International Auto Show, it did not include the ERAD system of the original concept. In February 2013, Land Rover unveiled at the 83rd Geneva Motor Show an All-Terrain Electric Defender that produces zero emissions. The electric vehicle was developed for research purposes following successful trials of the Defender-based electric vehicle, Leopard 1. The vehicle is capable of producing 70 kW and 330 N·m of torque, and has a range of 80 kilometres; in low-speed off-road use it can last for up to eight hours before recharging. Abilities Power take-off (PTO) was integral to the Land Rover concept from 1948, enabling farm machinery and many other items to be run with the vehicle stationary.
Maurice Wilks' original instruction was "...to have power take-offs everywhere!" The 1949 report by the British National Institute of Agricultural Engineering and Scottish Machinery Testing Station contained this description: "the power take-off is driven through a Hardy Spicer propeller shaft from the main gearbox output and two interchangeable pinions giving two ratios. The PTO gearbox casing is bolted to the rear chassis cross-member and a belt pulley driven from the PTO shaft through two bevel gears can be bolted to the PTO gearbox casing." PTOs remained regular options on Series I, II and III Land Rovers up to the demise of the Series Land Rover in 1985. An agricultural PTO on a Defender is possible as a special order. Land Rovers (the Series/Defender models) are available in a variety of body styles, from a simple canvas-topped pick-up truck to a twelve-seat fully trimmed station wagon. Both Land Rover and out-of-house contractors have offered conversions and adaptations of the basic vehicle, such as fire engines, excavators, 'cherry picker' hydraulic platforms, ambulances, snowploughs and six-wheel-drive versions, as well as one-off special builds including amphibious Land Rovers and vehicles fitted with tracks instead of wheels. Military use Various Land Rover models have been used in a military capacity, most notably by the British Army and Australian Army. Modifications may include military "blackout" lights, heavy-duty suspension, uprated brakes, 24-volt electrics, convoy lights, electronic suppression of the ignition system, blackout curtains and mounts for special equipment and small arms. Dedicated military models have been produced, such as the 101 Forward Control and the air-portable 1/2 ton Lightweight. Military uses include light utility vehicle; communications platform; weapon platform for recoilless rifles, anti-tank weapons (e.g. TOW or M40 recoilless rifle), surface-to-air guided weapons or machine guns; ambulances; and workshops.
The Discovery has also been used in small numbers, mostly as a liaison vehicle. Two models that were designed for military use from the ground up are the 101 Forward Control from the early 1970s and the Lightweight or Airportable from the late 1960s. The latter was intended to be transported under a helicopter. The Royal Air Force Mountain Rescue Service (RAFMRS) teams were early users in the late 1950s and early 1960s, and their convoys of Land Rovers and larger military trucks were a sight often seen in the mountain areas of the United Kingdom. Originally RAFMRS Land Rovers had blue bodies and bright yellow tops, to be better seen from above. In 1981, the colour scheme was changed to green with yellow stripes. More recently, vehicles have been painted white, and are issued with fittings similar to those of civilian UK Mountain Rescue teams. One adaptation of Land Rovers to military purposes is the "Pink Panther" model. Approximately 100 Series IIA models were adapted for reconnaissance use by the British special forces, the SAS. For desert use they were often painted pink, hence the name. The vehicles were fitted with, among other gear, a sun compass, machine guns, larger fuel tanks and smoke dischargers. Similar adaptations were later made to Series IIIs and 90/110/Defenders. The Australian Army adapted the Land Rover Series 2 into the Long Range Patrol Vehicle for use by the Special Air Service Regiment, and as an anti-tank "gunbuggy" fitted with an M40 recoilless rifle. The 75th Ranger Regiment of the United States Army also adapted twelve Land Rovers, officially designated the Ranger Special Operations Vehicle. Series and Defender models have also been armoured. The most widespread of these is the Shorts Shorland, built by Short Brothers of Belfast. The first of these were delivered in 1965 to the Royal Ulster Constabulary, the Northern Ireland police force.
They were originally wheelbase models with an armoured body and a turret from the Ferret armoured car. By 1990, more than 1,000 had been produced. In the 1970s, a more conventional armoured Land Rover, called the Hotspur, was built in Wales for the Royal Ulster Constabulary. The Land Rover Tangi was built by the Royal Ulster Constabulary's own vehicle engineering team during the 1990s. The British Army has used various armoured Land Rovers, first in Northern Ireland but also in more recent campaigns. It first added protective panels to Series General Service vehicles (the Vehicle Protection Kit, or VPK). Later it procured the Glover Webb APV and finally the Courtaulds (later NP Aerospace) Composite Armoured Vehicle, commonly known as the Snatch. These were originally based on heavy-duty V8 110 chassis, but some have recently been re-mounted on new chassis from Otokar of Turkey and fitted with diesel engines and air-conditioning for Iraq. Although these now have more in common with the 'Wolf' (Defender XD) Land Rovers that many mistakenly confuse them with, the Snatch and the Wolf are different vehicles. The most radical conversion of a Land Rover for military purposes was the Centaur half-track, based on a Series III with a V8 engine and a shortened belt drive from the Alvis Scorpion light tank. A small number were manufactured, and they were used by Ghana, among others. The Land Rover is used by military forces throughout the world. The current generation used by the British Army, the Snatch 2, has an upgraded and strengthened chassis and suspension compared with civilian-specification vehicles. There is also the Land Rover WMIK (weapons mount installation kit) used by the British Army. The WMIK carries a raised weapon, usually a Browning heavy machine gun or a grenade machine gun, used for ground support, and a GPMG (general-purpose machine gun) mounted next to the driver for vehicle protection.
Competitive use Highly modified Land Rovers have competed in the Dakar Rally and won the Macmillan 4x4 UK Challenge almost every year, as well as having been the vehicle used for the Camel Trophy; Land Rover now runs its own G4 Challenge. Driver training Land Rover Experience was established in 1990, and consists of a network of centres throughout the world, set up to help customers get the most out of their vehicles' on- and off-road capability. The flagship centres are Land Rover's bases at Solihull, Eastnor, Gaydon and Halewood. Courses offered include off-road driving, winching and trailer handling, along with a variety of corporate and individual 'Adventure Days'. The factory centres at Solihull and Halewood have manufacturing tours, while Gaydon has an engineering tour. Safety Model-by-model road accident statistics from the UK Department for Transport show that the Land Rover Defender is one of the safest cars on British roads as measured by the chance of death in two-car injury accidents. The figures, which were based on data collected by police forces following accidents between 2000 and 2004 in Great Britain, showed that Defender drivers had a 1% chance of being killed or seriously injured and a 33% chance of sustaining any kind of injury. Other four-wheel-drive vehicles scored equally highly, and collectively these vehicles were much safer for their passengers than those in other classes such as passenger cars and MPVs. These figures reflect the fact that drivers of large-mass vehicles are likely to be safer, often at the expense of other drivers if they collide with smaller cars. Clubs The original Land Rover Owners Club was set up by the Rover Company in 1954. The company published the Land Rover Owners Club Review magazine for members from 1957 to 1968, when the club became the Rover Owners Association. This original association fell away when the company merged with British Leyland. There are many Land Rover clubs throughout the UK and internationally.
Land Rover clubs break down into a number of groups of varying interests. Single Marque |
additive. Australia and New Zealand do not use a prefix letter when listing additives in the ingredients. An additive that appears in the INS does not automatically have a corresponding E number. INS numbers are assigned by the committee to identify each food additive; they generally correspond to E numbers for the same compound, e.g. INS 102, Tartrazine, is also E102. INS numbers are not unique and, in fact, one number may be assigned to a group of similar compounds. List of INS numbers Except where stated, the list of INS numbers and associated food additives is based on the most recent publication of the Codex Alimentarius, Class Names and the International Numbering System for Food Additives, first published in 1989, with revisions in 2008 and 2011. E number and American approval flags are derived from other sources. In the table below, food additives approved for the EU are listed with an 'E', those approved for Australia and New Zealand with an 'A', and those approved for the US with a 'U', even though the US does not use the INS numbering system. See also Codex Alimentarius Codex Alimentarius Austriacus E number
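The numbering-and-flags scheme described above can be sketched as a small lookup table. This is an illustrative sketch only: the `ADDITIVES` dict, its two hand-picked entries and the `eu_label` helper are our own illustration of the table's structure, not authoritative approval data.

```python
# Illustrative model of the INS table: each additive carries a set of
# approval flags ('E' = EU, 'A' = Australia/New Zealand, 'U' = US).
# The entries below are a hand-picked subset for demonstration only.
ADDITIVES = {
    "100": {"name": "Curcumin", "flags": {"E", "A"}},
    "102": {"name": "Tartrazine", "flags": {"E", "A", "U"}},
}

def eu_label(ins_number):
    # An EU-approved additive's E number matches its INS number
    # (e.g. INS 102 -> E102); others have no E label.
    entry = ADDITIVES[ins_number]
    return "E" + ins_number if "E" in entry["flags"] else None

print(eu_label("102"))  # -> E102
```

Note that because INS numbers are not unique, a production-quality table would map some numbers to a group of compounds rather than a single name.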
dissociation of a weak electrolyte. The law takes the form

K_d = [A+][B−]/[AB] = α²c/(1 − α)

where the square brackets denote concentration, c is the total concentration of electrolyte and α is the degree of dissociation. Using α = Λ_c/Λ_0, where Λ_c is the molar conductivity at concentration c and Λ_0 is the limiting value of molar conductivity extrapolated to zero concentration or infinite dilution, this results in the following relation:

K_d = (Λ_c/Λ_0)² c / (1 − Λ_c/Λ_0)

Derivation Consider a binary electrolyte AB which dissociates reversibly into A+ and B− ions. Ostwald noted that the law of mass action can be applied to such systems as dissociating electrolytes. The equilibrium state is represented by the equation:

AB ⇌ A+ + B−

If α is the fraction of dissociated electrolyte, then αc is the concentration of each ionic species; (1 − α) must therefore be the fraction of undissociated electrolyte, and (1 − α)c the concentration of the same. The dissociation constant may therefore be given as

K_d = (αc)(αc)/((1 − α)c) = α²c/(1 − α)

For very weak electrolytes α ≪ 1 (although neglecting α in the denominator for the less weak electrolytes yields inaccurate results), implying that 1 − α ≈ 1. This gives

K_d ≈ α²c, so that α ≈ √(K_d/c)

Thus, the degree of dissociation of a weak electrolyte is proportional to the inverse square root of the concentration, or the square root of the dilution. The concentration of any one ionic species is given by the root of the product of the dissociation constant and the concentration of the electrolyte. Limitations The Ostwald law of dilution provides a satisfactory description of
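The approximation for very weak electrolytes can be checked numerically against the exact expression. A minimal sketch, using illustrative values for a weak acid (roughly acetic acid, K ≈ 1.8×10⁻⁵ at c = 0.1 mol/L); the function names are ours, not from the text:

```python
import math

def alpha_approx(K, c):
    # Very-weak-electrolyte approximation: K ≈ α²c, so α ≈ √(K/c)
    return math.sqrt(K / c)

def alpha_exact(K, c):
    # Solve K = α²c / (1 − α) exactly, i.e. cα² + Kα − K = 0 (positive root)
    return (-K + math.sqrt(K * K + 4 * K * c)) / (2 * c)

K, c = 1.8e-5, 0.1  # illustrative dissociation constant and concentration
print(alpha_approx(K, c))  # ≈ 0.0134 (degree of dissociation)
print(alpha_exact(K, c))   # slightly smaller, since 1 − α < 1
```

As the output shows, at this concentration the two agree to within about 1%, which is why the 1 − α ≈ 1 simplification is acceptable for very weak electrolytes but degrades as α grows.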
The Centre d'Art La Panera is a contemporary art institution. The Museu d'Art Jaume Morera displays art from the 20th and 21st centuries (as well as artwork by its namesake). The city has a number of small municipal galleries, such as the Sala Municipal d'Exposicions de Sant Joan and the Sala Manel Garcia Sarramona. There are also several institutions dedicated to local artists, such as the Sala Leandre Cristòfol, containing artwork by the sculptor and painter Leandre Cristòfol (1908–1998); and the Sala Coma Estadella, dedicated to the sculptor and painter Albert Coma Estadella (1933–1991). Private art galleries include the Espai Cavallers. The private foundation CaixaForum Lleida and the Public Library of Lleida also offer regular exhibits. The now-defunct Petite Galerie was an innovative and influential gallery in the 1970s. The Escola Municipal de Belles Arts provides higher education in the arts. Traditional culture Traditional celebrations include the main annual town festivity, the Festa Major; the Fira de Sant Miquel; and L'Aplec del Caragol, the biggest escargot-eating festival of its sort in the world, held at the Camps Elisis since 1980. The latter is a gastronomic festivity focused on escargot cooking and is celebrated yearly at the end of May. "L'Aplec" gathers thousands of people around the table to taste the most traditional dishes of Lleida. Due to its strong popularity, it was declared a traditional festivity of national interest in 2002 by the Generalitat of Catalonia, and two years later it was also declared as such by the Spanish Government. The main traditional celebrations in Lleida are chaired by the twelve emblematic "Gegants de la Paeria" (Giants of the Town Hall), the two oldest made in 1840. Nightlife Lleida has a bar and clubbing area, informally known as Els Vins. The oldest part of the quarter, known as Els Vins Vells, has been largely replaced by Els Vins Nous, an architecturally newer and more upscale area.
Most big clubs in Lleida are located outside the town and are not easily accessible without a car, though on Saturday nights there is a bus. Main sights Seu Vella, a cathedral built in a blend of Romanesque and Gothic styles over time, and made a military fortress in the 18th century. There is also an older, and mostly destroyed, Palau de la Suda, built during Arab rule and later used as a royal residence by the counts of Barcelona and kings of Aragon. Both medieval buildings are situated on the so-called Turó de la Seu, a medium-sized hill that overlooks the town. Seu Nova, the baroque cathedral, in use since Bourbon rule. It was burnt during the Spanish Civil War by the anarchists commanded by Durruti. Institut d'Estudis Ilerdencs, formerly a hospital (Antic Hospital de Santa Maria) built in Gothic style, but today a historical museum and research centre open to visitors, with historically significant artworks and artefacts from Iberian, Roman, Arab, medieval and modern times, as well as an exhibit area usually showcasing contemporary local artists. La Paeria, the city council and also a historical site with remains and artefacts from Roman times through the Moorish rule, medieval and modern times, including old prison cells. Gardeny, a hill hosting a fortress built between the 12th and 13th centuries, used by the Knights Templar in the Middle Ages after the area (a fifth of the town) had been granted to them by Count Ramon Berenguer IV. The gardens known as Camps Elisis, already used by the Romans; the Mermaid Fountain there is a notable piece. La Mitjana, a park at the edge of town with wilderness areas adjacent to an old dam on the river Segre. Les Basses d'Alpicat, a park, currently closed awaiting renovation. Church of Sant Llorenç, a 12th-century Romanesque church with 15th-century Gothic additions; the interior is well preserved. Church of Sant Martí, a 12th-century Romanesque church.
The Bishop of Lleida's Palace on Rambla d'Aragó, which also serves as an art museum displaying pieces spanning from Romanesque to Baroque times. El Roser, a 13th-century convent built by the Dominican Order. It hosted a fine arts academy of the same name and has recently been controversially renovated and turned into a parador (a luxury hotel in a historical location). Lleida Public Library, on Rambla d'Aragó, in the building previously known as La Maternitat, a mid-19th-century orphanage. Museum of Lleida, opened in 2008 and owned by the Diocese of Lleida, focusing on the town's history. Some of the artefacts it contains, which come from areas historically belonging to the diocese but not currently part of the province of Lleida's territory and jurisdiction, have been the object of contention with the neighbouring dioceses and the government of the autonomous community of Aragon. Sala Cristòfol, a museum devoted to the works of the avant-garde sculptor Leandre Cristòfol. Sala Mercat del Pla, an art gallery. Museu d'Art Jaume Morera, an art museum displaying art from the 20th and 21st centuries in a modernist building. Centre d'Art de la Panera, a small contemporary art institution. Museu de l'Aigua, in the Parc de l'aigua. Auditori Enric Granados, Lleida's foremost concert hall; next to its basement and on public display are some ancient ruins. La Llotja de Lleida, a concert hall, theatre, opera and congress hall opened in 2010. Parc de l'aigua, an urban park in the southern neighbourhoods.
Sports Unió Esportiva Lleida, based at the Camp d'Esports CE Lleida Bàsquet, based at the Pavelló Barris Nord Sister cities Lleida has sister relationships with many places worldwide: Ferrara, Italy Foix, France Hefei, China Lérida, Colombia Perpignan, France Monterey, California, United States References in culture The city is the subject of the Catalan folk song La Presó de Lleida ("The prison of Lleida"), which was already attested in the 17th century and may be even older. It is a very popular tune, covered by many artists such as Joan Manuel Serrat. See also Battle of Ilerda Diocese of Lleida, Bishop of Lleida Talarn Dam University of Lleida Volta a Lleida References Bibliography External links Tourism information of Lleida Internet Portal of

growth, it met a massive migration of Andalusians who helped the town undergo a relative demographic growth. Nowadays it is home to immigrants of 146 different nationalities. During 2007 Lleida was the year's Capital of Catalan Culture. Climate Lleida has a temperate semi-arid climate (Köppen BSk). Winters are mild and foggy, though cooler than places on the coast, while summers are hot and dry. Frosts are common during winter, although snow can occasionally fall, averaging 1 or 2 days. Precipitation is low, with an annual average of , with a peak in April and May and another peak in September and October. Districts and neighbourhoods Lleida is divided into the following districts by the Observatori Socioeconòmic de Lleida: Balàfia Les Basses d'Alpicat La Bordeta Butsènit Camp d'Esports Cappont Centre Històric Ciutat Jardí Humbert Torres Instituts-Templers Joc de la Bola Llívia Magraners Mariola Pardinyes Príncep de Viana-Clot Rambla Ferran-Estació Secà de Sant Pere Torres de Sanuí Universitat Transport Railway Lleida is served by Renfe's Madrid-Barcelona high-speed rail line, serving Barcelona, Zaragoza, Calatayud, Guadalajara and Madrid.
Lleida has a new airport, opened in January 2010, and a minor airfield located in Alfès. The town is also the western terminus of the Eix Transversal Lleida-Girona, and a railway covering the same route (Eix Transversal Ferroviari) is currently under planning. Lleida's only passenger railway station is Lleida Pirineus. It is served by both Renfe and Ferrocarrils de la Generalitat de Catalunya train lines. In the future a Rodalies Lleida commuter network will connect the town with its adjacent area and the main towns of its province, improving the existing network with more train frequency and newly built infrastructure. A second railway station is Pla de la Vilanoveta, in an industrial area and only used by freight trains. A future railway museum will be located in its facilities. Since 2008 the bulk of public transport in Lleida's surrounding area, mainly buses operated by several companies, has been managed by the Autoritat Territorial de la Mobilitat de l'Àrea de Lleida. Bus The urban buses, coloured yellow with blue stripes and owned by Autobusos de Lleida, include the following lines: L-1 Interior L-2 Ronda L-3 Pardinyes L-4 Mariola – L-5 Bordeta L-6 Magraners L-7 Secà L-8 Balàfia-Gualda L-9 Hospitals L-10 Exterior L-11 Llívia-Caparrella L-11B Llívia-Caparrella-Butsenit L-12 C.Històric-Universitat L-13 Cappont L-14 Agrònoms L-P Polígons L-17 Bordeta-Ciutat Jardí L-18 Palau de Congressos- Rambla de la Mercé L-19 Butsenit L-N Wonder (Regular night service) L-Bus Turístic (tourist bus) L- Aeroport L- Llotja In addition to these, there is a tourist bus and a regular night service to nearby clubs. Airport Lleida long depended on nearby airports and had no local air traffic. Lleida-Alguaire airport opened in 2010. Future and planned services A tram-train system is pending approval.
Using an existing but outdated passenger line, it would link Balaguer and Lleida, crossing both towns in a much-needed move towards better public transport, both inner-city and between localities. Languages Lleida is a traditionally Catalan-speaking city and province, with a characteristic dialect (known as Western or, more specifically, North-Western Catalan, or colloquially lleidatà). Most of the population is actively bilingual in Spanish. Culture Lleida was the Capital of Catalan Culture in 2007. Theatre and music venues The Enric Granados Auditorium is the city's concert hall and main music institution and conservatory. It is named after the composer Enric Granados, who was born in the city. CaixaForum Lleida (formerly known as Centre Cultural de la Fundació La Caixa) includes a concert hall. Teatre Municipal de l'Escorxador is the town's main theatre; it includes a concert venue, Cafè del Teatre. A theatre and congress centre, La Llotja de Lleida, opened in 2010. Music festivals There are two important music festivals in Lleida: the folk festival Músiques Disperses in March and the jazz festival Jazz Tardor in November. Concerts are also a regular fixture of the two local festivals, Sant Anastasi in May and Sant Miquel in September. Film CaixaForum Lleida is the usual venue for film-related events and screenings. A Latin American film festival (Mostra de Cinema Llatinoamericà de Lleida) is held yearly in the town, and an animation film festival called Animac is held every May. Art and museums The Lleida Museum opened in 2008 and displays historical artefacts and works of art from various periods. The Institut d'Estudis Ilerdencs, a historically relevant building, exhibits both ancient and contemporary art.
and "that", and was replaced by le, which means "the". As the principal city of Maine, Le Mans was the stage for struggles in the eleventh century between the counts of Anjou and the dukes of Normandy. When the Normans had control of Maine, William the Conqueror successfully invaded England and established an occupation. In 1069 the citizens of Maine revolted and expelled the Normans, resulting in Hugh V being proclaimed count of Maine. Geoffrey V of Anjou married Matilda of England in the cathedral. Their son Henry II Plantagenet, king of England, was born here. In 1154, during the reign of his uncle King Stephen, Henry landed in England with an army, intent on challenging Stephen for the throne. Some of the members of that feudal force were known by the surname 'del Mans' (Latin for 'of Mans', as the city was then known). In medieval records pertaining to the history of Gloucester is a reference to one such man, Walter del Mans, and beside his name 'Cenomanus' was added by the medieval scribe, so that there is no doubt as to Walter's origin. In the English censuses down to the twentieth century the surname Mans (latterly often spelled Manns) was virtually confined to the counties of Gloucestershire and Herefordshire and their borderlands, reflecting the settlement patterns in the Welsh Marches of Henry's original followers from Le Mans in 1154. A John Mans/Manns was escheator of Hereford 1399–1400. One family from [Le] Mans held the manor of Doddenham, Worcestershire. (Calendar of the Records of the Corporation of Gloucester, Item 96, ca. 1200; Fine Rolls Henry III, 23 August 1233 [Hereford]; 'Parishes: Doddenham', A History of the County of Worcester, volume 4 (1924), pp. 260–62.) Contact between England and Le Mans continued throughout the Angevin period. In the 13th century Le Mans came under the control of the French crown. It was subsequently invaded by England during the Hundred Years' War.
Industrialization took place in the 19th century, which saw the development of railway and motor vehicle production as well as textiles and tobacco manufacture. Wilbur Wright began official public demonstrations of the airplane he had developed with his younger brother Orville on 8 August 1908, at the Hunaudières horse racing track near Le Mans. World War II Soon after Le Mans was liberated by the U.S. 79th and 90th Infantry Divisions on 8 August 1944, engineers of the Ninth Air Force IX Engineering Command began construction of a combat Advanced Landing Ground outside of the town. The airfield was declared operational on 3 September and designated "A-35". It was used by several American fighter and transport units in additional offensives across France until late November of that year, when the airfield was closed. Main sights Le Mans has a well-preserved old town (Cité Plantagenêt, also called Vieux Mans) and the Cathédrale St-Julien, dedicated to St Julian of Le Mans, who is honoured as the city's first bishop. Remnants of a Roman wall are visible in the old town, and Roman baths are located by the river. These walls are highlighted every summer evening (July and August) in a light show that tells the history of the town. Arboretum de la Grand Prée Notre-Dame de la Couture, medieval church Notre Dame de Sainte Croix, neo-Gothic church Part of the former Cistercian abbey de l'Epau, founded by Queen Berengaria and currently maintained in extensive grounds by the Département de la Sarthe. Jardin des Plantes du Mans Musée de la reine Bérengère, a museum of Le Mans history located in a Gothic manor house. Musée de Tessé, the fine arts museum of the city, displaying paintings (including artworks by Philippe de Champaigne, Charles Le Brun, François Boucher, John Constable, Ingres, Théodore Géricault and Camille Corot) as well as archaeological collections and decorative arts.
Gallery Climate Le Mans has an oceanic climate influenced by the mild Atlantic air travelling inland. Summers are warm and occasionally hot, whereas winters are mild and cloudy. Precipitation is relatively uniform and moderate year round. Demographics There were 347,626 inhabitants in the metropolitan area (aire urbaine) of Le Mans, with 142,946 of these living in the city proper (commune). In 1855 Le Mans absorbed four neighbouring communes; population data for 1851 and earlier refer to the pre-1855 borders. Transportation The Gare du Mans is the main railway station of Le Mans. It takes one hour to reach Paris from Le Mans by TGV high-speed train. There are also TGV connections to Lille, Marseille, Nantes, Rennes and Brest. Gare du Mans is also a hub for regional trains. Le Mans inaugurated a new light rail system on 17 November 2007. Sport Motorsport The first French Grand Prix took place on a 64-mile (103 km) circuit.
Los Angeles Science Fantasy Society. She has won two Hugo Awards for Best Series, in 2017 for the Vorkosigan Saga and in 2018 for the World of the Five Gods series. The Science Fiction Writers of America named her its 36th SFWA Grand Master in 2019. The bulk of Bujold's works comprises three separate book series: the Vorkosigan Saga, the Chalion series, and the Sharing Knife series. Biography Bujold is the daughter of Robert Charles McMaster and attributes her early interest in science fiction, as well as certain aspects of the Vorkosigan Saga, to his influence. He was editor of the Nondestructive Testing Handbook. Bujold writes that her experience growing up with a famous father is reflected in the same experience that her characters (Miles, Fiametta) have of growing up in the shadow of a "Great Man". Having observed this tendency in both genders, she wonders why it is always called "great man's son syndrome", and never "great man's daughter's syndrome." Her brother, an engineer like their father, helped provide technical details to support her writing of Falling Free. She has stated that she was always a "voracious reader". She started reading adult science fiction at the age of nine, picking up the habit from her father. She became a member of science fiction fandom, joined the Central Ohio Science Fiction Society, and co-published StarDate, a science fiction fanzine in which a story of hers appeared under the byline Lois McMaster. Her reading tastes later expanded and she stated she now reads "history, mysteries, romance, travel, war, poetry, etc". She attended Ohio State University from 1968 to 1972. While she was interested in writing, she didn't pursue an English major, feeling it was too concerned with literary criticism instead of literary creation. She married John Fredric Bujold in 1971, but they divorced in the early 1990s. The marriage produced two children, a daughter named Anne (born 1979) and a son named Paul (born 1981). 
Anne Bujold is currently (January 2020) Artist-In-Residence for the Metals Department at the Appalachian Center for Craft, a campus of Tennessee Tech; formerly she was a metal artist and welder in Portland, Oregon and vice president of the Northwest Blacksmith Association. Bujold currently lives in Minneapolis, Minnesota. Inspiration Bujold had been friends with Lillian Stewart Carl since high school, where they "collaborated on extended story lines [but where] only a fragment of the total was written out." At one point, she even co-produced a Star Trek zine called StarDate which she wrote for. In college, she wrote a Sherlock Holmes mystery as well. However, she stopped writing after that, being busy with marriage, family, and a career in hospital patient care. It wasn't until her thirties that she returned to writing. Bujold has credited her friend Lillian Stewart Carl's first book sales with inspiring her to return to the field: "it occurred to me that if she could do it, I could do it too." She originally planned to write as a hobby again, but discovered the amount of work required was too much for anything other than a profession, so she decided to turn professional. With support from Carl and Patricia Wrede, she was able to complete her first novel. Science fiction Lois Bujold wrote three books (Shards of Honor, The Warrior's Apprentice and Ethan of Athos) before The Warrior's Apprentice was finally accepted, after four rejections. The Warrior's Apprentice was the first book purchased, though not the first Vorkosigan book written, nor would it be the first one to be published. On the strength of The Warrior's Apprentice, Baen Books agreed to a three-book deal to include the two bracketing novels. By 2010, Baen Books claimed to have sold 2 million copies of Bujold's books.
Bujold is best known for her Vorkosigan Saga, a series of novels featuring Miles Vorkosigan, a physically impaired interstellar spy and mercenary admiral from the planet Barrayar, set approximately 1000 years in the future. The series also includes prequels starring Miles' parents, along with companion novels centered on secondary characters. Earlier titles are generally firmly in the space opera tradition with no shortage of battles, conspiracies, and wild
Diet Consumption by humans Absorption of lycopene requires that it be combined with bile salts and fat to form micelles. Intestinal absorption of lycopene is enhanced by the presence of fat and by cooking. Lycopene dietary supplements (in oil) may be more efficiently absorbed than lycopene from food. Lycopene is not an essential nutrient for humans, but is commonly found in the diet, mainly from dishes prepared from tomatoes. The median and 99th percentile of dietary lycopene intake have been estimated to be 5.2 and 123 mg/d, respectively. Sources Fruits and vegetables that are high in lycopene include autumn olive, gac, tomatoes, watermelon, pink grapefruit, pink guava, papaya, seabuckthorn, wolfberry (goji, a berry relative of tomato), and rosehip. Ketchup is a common dietary source of lycopene. Although gac (Momordica cochinchinensis Spreng) has the highest content of lycopene of any known fruit or vegetable (multiple times more than tomatoes), tomatoes and tomato-based sauces, juices, and ketchup account for more than 85% of the dietary intake of lycopene for most people. The lycopene content of tomatoes depends on variety and increases as the fruit ripens. Unlike other fruits and vegetables, where nutritional content such as vitamin C is diminished upon cooking, processing of tomatoes increases the concentration of bioavailable lycopene. Lycopene in tomato paste is up to four times more bioavailable than in fresh tomatoes. Processed tomato products such as pasteurized tomato juice, soup, sauce, and ketchup contain a higher concentration of bioavailable lycopene compared to raw tomatoes. Cooking and crushing tomatoes (as in the canning process) and serving in oil-rich dishes (such as spaghetti sauce or pizza) greatly increases assimilation from the digestive tract into the bloodstream. Lycopene is fat-soluble, so the oil is said to help absorption. Gac has high lycopene content derived mainly from its seed coats. Cara cara navel oranges and other citrus fruits, such as pink grapefruit, also contain lycopene.
Some foods that do not appear red also contain lycopene, e.g., asparagus, which contains about 30 μg of lycopene per 100-g serving (0.3 μg/g) and dried parsley and basil, which contain around 3.5–7.0 μg/g of lycopene. When lycopene is used as a food additive (E160d), it is usually obtained from tomatoes. Adverse effects Lycopene is non-toxic and commonly found in the diet, mainly from tomato products. There are cases of intolerance or allergic reaction to dietary lycopene, which may cause diarrhea, nausea, stomach pain or cramps, gas, and loss of appetite. Lycopene may increase the risk of bleeding when taken with anticoagulant drugs. Because lycopene may cause low blood pressure, interactions with drugs that affect blood pressure may occur. Lycopene may affect the immune system, the nervous system, sensitivity to sunlight, or drugs used for stomach ailments. Lycopenemia is an orange discoloration of the skin that is observed with high intakes of lycopene. The discoloration is expected to fade after discontinuing excessive lycopene intake. Research and potential health effects A 2017 review concluded that tomato products and lycopene supplementation had small positive effects on cardiovascular risk factors, such as reduced blood lipids and blood pressure. A 2010 review concluded that research has been insufficient to establish whether lycopene consumption affects human health. In basic and clinical research for its potential effects on cardiovascular diseases and prostate cancer, there is insufficient evidence for any effect. A 2020 review of randomized controlled trials found conflicting findings for lycopene improving cardiovascular risk factors. 
Regulatory status in Europe and the United States In a review of literature on lycopene and its potential benefit in the diet, the European Food Safety Authority concluded there was insufficient evidence for lycopene having antioxidant effects in humans, particularly in skin, heart function, or vision protection from ultraviolet light. Although lycopene from tomatoes has been tested in humans for cardiovascular diseases and prostate cancer, no effect on any disease was found. The US Food and Drug Administration, in rejecting manufacturers' requests in 2005 to allow "qualified labeling" for lycopene and the reduction of various cancer risks, provided a conclusion that remains in effect: "...no studies provided information about whether lycopene intake may reduce the risk of any
It was in the possession of the counts of Leiningen-Dagsburg-Landeck, whose arms, differenced by an escutcheon of the Imperial eagle, served as the arms of Landau until 1955. The town was granted a charter in 1274 by King Rudolf I of Germany, who declared the town a Free Imperial Town in 1291; nevertheless Prince-Bishop Emich of Speyer, a major landowner in the district, seized the town in 1324. The town did not regain its ancient rights until 1511, under Maximilian I. An Augustinian monastery was founded in 1276. After the Peace of Westphalia in 1648, control of Landau was ceded to France, although with certain ill-defined reservations. Landau was later part of France from 1680 to 1815, during which it was one of the Décapole, the ten free cities of Alsace, and received its modern fortifications by Louis XIV's military architect Vauban in 1688–99, making the little town (its 1789 population was approximately 5,000) one of Europe's strongest citadels. In the War of the Spanish Succession it underwent four sieges. After the siege of 1702, lost by the French, an Imperial garrison was installed in Landau. In a subsequent siege from 13 October to 15 November 1703 the French regained the town, following their victory in the Battle of Speyerbach. A third siege, begun on 12 September 1704 by Louis, Margrave of Baden-Baden, ended on 23 November 1704 with a French defeat. During this siege King Joseph I arrived at Landau from Vienna in a newly developed convertible carriage. This carriage became very popular and gave its name to the landau in English, or Landauer in German. The French recaptured Landau once more in a final siege, which lasted from 6 June to 20 August 1713, led by Marshal General Villars. Landau was part of the Bas-Rhin department between 1789 and 1815. After Napoleon's Hundred Days following his escape from Elba, Landau, which had remained French, was granted to the Kingdom of Bavaria in 1815 and became the capital of one of the thirteen Bezirksämter (counties) of the Bavarian Rheinkreis, later renamed Pfalz. In 1840 the famous political cartoonist Thomas Nast was born in Landau. Following World War II, Landau was an important barracks town for the French occupation. Main sights Landau's large main square (Rathausplatz) is dominated by the town hall (Rathaus) and the market hall (Altes Kaufhaus). In the 19th century, the former fortifications gave
with despair and hope, and was everything one could have wished her to have been" in a performance "not to be missed and never to be forgotten", with her "grace and authority" that was "perhaps more than Garbo...born for Anna Christie:--Or more properly, Anna Christie was born for her." In 1980, Brian De Palma, who directed Carrie, wanted Liv Ullmann to play the role of Kate Miller in the erotic crime thriller Dressed to Kill and offered it to her, but she declined because of the violence. The role subsequently went to Angie Dickinson. In 1982 Ingmar Bergman wanted Ullmann to play Emelie Ekdahl in his last feature film, Fanny and Alexander, and wrote the role with this in mind. She declined it, feeling the role was too sad. She later stated in interviews that turning it down was one of the few things she really regretted. In 1984, she was chairperson of the jury at the 34th Berlin International Film Festival, and in 2002 she chaired the jury of the Cannes Film Festival. She introduced her daughter, Linn Ullmann, to the audience with the words: "Here comes the woman whom Ingmar Bergman loves the most". Her daughter was there to receive the Prize of Honour on behalf of her father; she would return to serve on the jury herself in 2011. She published two autobiographies, Changing (1977) and Choices (1984). Ullmann's first film as a director was Sofie (1992); her friend and former co-actor, Erland Josephson, starred in it. She later directed Faithless (2000), written by Bergman. Faithless garnered nominations for both the Palme d'Or and the Best Actress award at the Cannes Film Festival. In 2003, Ullmann reprised her role from Scenes from a Marriage in Saraband, Bergman's final telemovie. Her previous screen role had been in the Swedish movie Zorn (1994). In 2004, Ullmann revealed that she had received an offer in November 2003 to play in three episodes of the popular American series Sex and the City.
She was amused by the offer, and said that it was one of the few programs she regularly watched, but she turned it down. Later that year, Steven Soderbergh wrote a role in the movie Ocean's 12 especially for her, but she also turned that down. During 2006, Ullmann announced that she had been forced to end her longtime wish of making a film based on A Doll's House. According to her statement, the Norwegian Film Fund was preventing her and writer Kjetil Bjørnstad from pursuing the project. Australian actress Cate Blanchett and British actress Kate Winslet had been intended for the main roles of the movie. She later directed Blanchett in the play A Streetcar Named Desire, by Tennessee Williams, at the Sydney Theatre Company in Sydney, which was performed September through October 2009, and then continued from 29 October to 21 November 2009 at the John F. Kennedy Center for the Performing Arts in Washington, D.C., where it won a Helen Hayes Award for Outstanding Non-resident Production, as well as awards for actress and supporting performer, for 2009. The play was also performed at the Brooklyn Academy of Music in Brooklyn, New York. Ullmann narrated the Canada–Norway co-produced animated short movie The Danish Poet (2006), which won the Academy Award for Animated Short Film at the 79th Academy Awards in 2007. In 2008, she was the head of the jury at the 30th Moscow International Film Festival.
During 2012, she attended the International Indian Film Academy Awards in Singapore, where she was honored for her Outstanding Contributions to International Cinema and also presented her film about her relationship with Ingmar Bergman. In 2013, Ullmann directed a film adaptation of Miss Julie. The film, released in September 2014, stars Jessica Chastain, Colin Farrell, and Samantha Morton. It was widely praised by the Norwegian press. Personal life In addition to Norwegian, Ullmann speaks Swedish, English, and other European languages. She had a romantic relationship with Ingmar Bergman (1965–1970). Writer Linn Ullmann (b. 1966) is their daughter. Following an affair with the actor John Lithgow, Ullmann married Boston real estate developer Donald Saunders in 1985, and they remained together after their 1995 divorce. She is a UNICEF Goodwill Ambassador, and has traveled widely for the organization. She is also co-founder and honorary chair of the Women's Refugee Commission. In 2005, King Harald V of Norway made Ullmann a Commander with Star of the Order of St. Olav. She received an honorary degree, a Doctorate of Philosophy, from the Norwegian University of Science and Technology (NTNU) in 2006. Filmography Film As actress As director Television Theatre Awards and recognition Honors 1972: Golden Globe for Best Actress in a Motion Picture – Drama (The Emigrants) 1984: Four Freedoms Laureate, Freedom from Want 2006: Ibsen Centennial Commemoration Award 2006: The Danish Poet won its director Torill Kove the Academy Award for Best Animated Short Film at the 79th Academy Awards. 2010: 2010 FIAF Award 2012: International Indian Film Academy Awards for Outstanding Contribution to International Cinema 2021: Academy Honorary Award See also List of film and television directors List of theatre directors in the 20th-21st centuries List of Norwegian actors List of Norwegian writers References Further reading Robert Emmet Long, ed. (2006). Liv Ullmann: Interviews.
The Soviet Union and United States had previously sent animals only on sub-orbital flights. Three dogs were trained for the Sputnik 2 flight: Albina, Mushka, and Laika. Soviet space-life scientists Vladimir Yazdovsky and Oleg Gazenko trained the dogs. To adapt the dogs to the confines of the tiny cabin of Sputnik 2, they were kept in progressively smaller cages for periods of up to 20 days. The extensive close confinement caused them to stop urinating or defecating, made them restless, and caused their general condition to deteriorate. Laxatives did not improve their condition, and the researchers found that only long periods of training proved effective. The dogs were placed in centrifuges that simulated the acceleration of a rocket launch and were placed in machines that simulated the noises of the spacecraft. This caused their pulses to double and their blood pressure to increase by 30–65 torr. The dogs were trained to eat a special high-nutrition gel that would be their food in space. Before the launch, one of the mission scientists took Laika home to play with his children. In a book chronicling the story of Soviet space medicine, Dr. Vladimir Yazdovsky wrote, "Laika was quiet and charming ... I wanted to do something nice for her: She had so little time left to live." Preflight preparations Yazdovsky made the final selection of dogs and their designated roles. Laika was to be the "flight dog"—a sacrifice to science on a one-way mission to space. Albina, who had already flown twice on a high-altitude test rocket, was to act as Laika's backup.
The third dog, Mushka, was a "control dog"—she was to stay on the ground and be used to test instrumentation and life support. Before leaving for the Baikonur Cosmodrome, Yazdovsky and Gazenko conducted surgery on the dogs, routing the cables from the transmitters to the sensors that would measure breathing, pulse, and blood pressure. Because the existing airstrip at Turatam near the cosmodrome was small, the dogs and crew first had to be flown aboard a Tu-104 plane to Tashkent. From there, a smaller and lighter Il-14 plane took them to Turatam. Training of the dogs continued upon arrival; one after another they were placed in the capsules to get familiar with the feeding system. According to a NASA document, Laika was placed in the capsule of the satellite on 31 October 1957—three days before the start of the mission. At that time of year, the temperatures at the launch site were extremely cold, and a hose connected to a heater was used to keep her container warm. Two assistants were assigned to keep a constant watch on Laika before launch. Just prior to liftoff on 3 November 1957 from Baikonur Cosmodrome, Laika's fur was sponged in a weak alcohol solution and carefully groomed, while iodine was painted onto the areas where sensors would be placed to monitor her bodily functions. One of the technicians preparing the capsule before final liftoff stated that "after placing Laika in the container and before closing the hatch, we kissed her nose and wished her bon voyage, knowing that she would not survive the flight." Voyage The exact time of the launch varies from source to source and is mentioned as 05:30:42 Moscow Time or 07:22 Moscow Time. At peak acceleration Laika's respiration increased to between three and four times the pre-launch rate. The sensors showed her heart rate was 103 beats/min before launch and increased to 240 beats/min during the early acceleration.
After reaching orbit, Sputnik 2's nose cone was jettisoned successfully; however, the "Block A" core did not separate as planned, preventing the thermal control system from operating correctly. Some of the thermal insulation tore loose, raising the cabin temperature to 40 °C (104 °F). After three hours of weightlessness, Laika's pulse rate had settled back to 102 beats/min; this took three times longer than it had during earlier ground tests, an indication of the stress she was under. The early telemetry indicated that Laika was agitated but eating her food. Approximately five to seven hours into the flight, no further signs of life were received from the spacecraft. The Soviet scientists had planned to euthanise Laika with a poisoned serving of food. For many years, the Soviet Union gave conflicting statements that she had died either from asphyxia, when the batteries failed, or that she had been euthanised. Many rumours circulated about the exact manner of her death. In 1999, several Russian sources reported that Laika had died when the cabin overheated on the fourth orbit. In October 2002, Dimitri Malashenkov, one of the scientists behind the Sputnik 2 mission, revealed that Laika had died of overheating by the fourth circuit of the flight. According to a paper he presented to the World Space Congress in Houston, Texas, "It turned out that it was practically impossible to create a reliable temperature control system in such limited time constraints." Over five months later, after 2,570 orbits, Sputnik 2—including Laika's remains—disintegrated during re-entry on 14 April 1958. Ethics of animal testing Due to the overshadowing issue of the Soviet–U.S. Space Race, the ethical issues raised by this experiment went largely unaddressed for some time. As newspaper clippings from 1957 show, the press was initially focused on reporting the political perspective, while the health and retrieval—or lack thereof—of Laika only became an issue later.
Sputnik 2 was not designed to be retrievable, and it had always been accepted that Laika would die. The mission sparked a debate across the globe on the mistreatment of animals and animal testing in general to advance science. In the United Kingdom, the National Canine Defence League called on all dog owners to observe a minute's silence, while the Royal Society for the Prevention of Cruelty to Animals (RSPCA) received protests even before Radio Moscow had finished announcing the launch. Animal rights groups at the time called on members of the public to protest at Soviet embassies. Others demonstrated outside the United Nations in New York. Laboratory researchers in the U.S. offered some support for the Soviets, at least before the news of Laika's death. In the Soviet Union, there was less controversy. Neither the media, books in the following years, nor the public openly questioned the decision to send a dog into space. In 1998, after the collapse of the Soviet regime, Oleg Gazenko, one of the scientists responsible for sending Laika into space, expressed regret for allowing her to die. In other Warsaw Pact countries, open criticism of the Soviet space program was difficult because of political censorship, but there were notable cases of criticism in Polish scientific circles. A Polish scientific periodical, "Kto, Kiedy, Dlaczego" ("Who, When, Why"), published in 1958, discussed the mission of Sputnik 2. In the periodical's section dedicated to astronautics, Krzysztof Boruń described the Sputnik 2 mission as "regrettable" and criticised not bringing Laika back to Earth alive as "undoubtedly a great loss for science". Legacy Laika is memorialised in the form of a statue and plaque at Star City, Russia, the Russian cosmonaut training facility. Created in 1997, the statue positions Laika behind the cosmonauts with her ears erect. Laika died within hours of launch from overheating, possibly caused by a failure of the central R-7 sustainer to separate from the payload.
The true cause and time of her death were not made public until 2002; instead, it was widely reported that she died when her oxygen ran out on day six or, as the Soviet government initially claimed, she was euthanised prior to oxygen depletion. On 11 April 2008, Russian officials unveiled a monument to Laika. A small monument in her honour was built near the military research facility in Moscow that prepared Laika's flight to space. It portrayed a dog standing on top of a rocket. She also appears on the Monument to the Conquerors of Space in Moscow. Sputnik 2 After the success of Sputnik 1 in October 1957, Nikita Khrushchev, the Soviet leader, wanted a spacecraft launched on 7 November 1957, the 40th anniversary of the October Revolution. Construction had already started on a more sophisticated satellite, but it would not be ready until December; this satellite would later become Sputnik 3. Meeting the November deadline meant building a new craft. Khrushchev specifically wanted his engineers to deliver a "space spectacular", a mission that would repeat the triumph of Sputnik 1, stunning the world with Soviet prowess. Planners settled on an orbital flight with a dog. Soviet rocket engineers had long intended a canine orbit before attempting human spaceflight; since 1951, they had lofted twelve dogs into sub-orbital space on ballistic flights, working gradually toward an orbital mission set for some time in 1958. To satisfy Khrushchev's demands, they expedited the orbital canine flight for the November launch. According to Russian sources, the official decision to launch Sputnik 2 was made on 10 or 12 October, leaving less than four weeks to design and build the spacecraft. Sputnik 2, therefore, was something of a rush job, with most elements of the spacecraft being constructed from rough sketches. Aside from the primary mission of sending a living passenger into space, Sputnik 2 also contained instrumentation for measuring solar irradiance and cosmic rays. 
The craft was equipped with a life-support system consisting of an oxygen generator and devices to avoid oxygen poisoning and to absorb carbon dioxide. A fan, designed to activate whenever the cabin temperature exceeded 15 °C (59 °F), was added to keep the dog cool. Enough food (in a gelatinous form) was provided for a seven-day flight, and the dog was fitted with a bag to collect waste. A harness was designed to be fitted to the dog, and there were chains to restrict her movements to standing, sitting, or lying down; there was no room to turn around in the cabin. An electrocardiogram monitored heart rate and further instrumentation tracked respiration rate, maximum arterial pressure, and the dog's movements. Training Laika was found as a stray wandering the streets of Moscow. Soviet scientists chose to use Moscow strays since they assumed that such animals had already learned to endure conditions of extreme cold and hunger. This specimen was a mongrel female, approximately three years old, who weighed about 6 kg (13 lb). Soviet personnel gave her several names and nicknames, among them Kudryavka (Russian for Little Curly), Zhuchka (Little Bug), and Limonchik (Little Lemon). Laika, the Russian name for several breeds of dogs similar to the husky, was the name popularised around the world. Its literal translation would be "Barker", from the Russian verb "layat" (лаять), "to bark". According to some accounts, the technicians actually renamed her from Kudryavka to Laika due to her loud barking. The American press dubbed her Muttnik (mutt + suffix -nik) as a pun on Sputnik, or referred to her as Curly. Her true pedigree is unknown, although it is generally accepted that she was part husky or other Nordic breed, and possibly part terrier. NASA refers to Laika as a "part-Samoyed terrier." A Russian magazine described her temperament as phlegmatic, saying that she did not quarrel with other dogs.
Released on March 3, 1986, Master of Puppets had a 72-week run on the Billboard 200 album chart and earned the band its first gold certification. The album debuted on March 29, 1986, at number 128 and peaked at number 29 on the Billboard 200. Billboard reported that 300,000 copies were sold in its first three weeks. More than 500,000 copies were sold in its first year, even with virtually no radio airplay and no music videos. In 2003, Master of Puppets was certified 6× platinum by the Recording Industry Association of America (RIAA), with six million copies shipped in the United States. Between the beginning of the Nielsen SoundScan era in 1991 and 2009, 4,578,000 copies were sold. The album was less successful internationally, despite entering the top 5 on the Finnish and the top 40 on the German and Swiss album charts in its inaugural year. In 2004, it peaked within the top 15 in Sweden. In 2008, the album reached the top 40 on the Australian and Norwegian album charts. It received 6× platinum certification from Music Canada and a gold certification from the British Phonographic Industry (BPI) for shipments of 600,000 and 100,000 copies, respectively. Accolades and legacy Master of Puppets has appeared in several publications' best album lists. It was ranked number 167 on Rolling Stone's list of the 500 Greatest Albums of All Time, maintaining that position in the 2012 revised list and rising to number 97 in the 2020 revision. The magazine would also later rank it second on its 2017 list of "100 Greatest Metal Albums of All Time", behind Black Sabbath's Paranoid. Time included the album in its list of the 100 best albums of all time. According to the magazine's Josh Tyrangiel, Master of Puppets reinforced the velocity of playing in heavy metal and diminished some of its clichés.
Slant Magazine placed the album at number 90 on its list of the best albums of the 1980s, calling Master of Puppets Metallica's best and most sincere recording. The album is featured in Robert Dimery's book 1001 Albums You Must Hear Before You Die. IGN named Master of Puppets the best heavy metal album of all time. The website stated it was Metallica's best because it "built upon and perfected everything they had experimented with prior" and that "all the pieces come together in glorious cohesion". Music journalist Martin Popoff also ranked it the best heavy metal album. The album was voted the fourth greatest guitar album of all time by Guitar World in 2006, and the title track ranked number 61 on the magazine's list of the 100 greatest guitar solos. Total Guitar ranked the main riff of the title track at number 7 among the top 20 guitar riffs. The April 2006 edition of Kerrang! was dedicated to the album and gave away the covers album Master of Puppets: Remastered to readers. Master of Puppets became thrash metal's first platinum album, and by the early 1990s thrash metal had successfully challenged and redefined the mainstream of heavy metal. Metallica and a few other bands headlined arena concerts and appeared regularly on MTV, although radio play remained incommensurate with their popularity. Master of Puppets is widely accepted as the genre's most accomplished album, and paved the way for subsequent development. The album, in the words of writer Christopher Knowles, "ripped Metallica away from the underground and put them atop the metal mountain". David Hayter from Guitar Planet recognized the album as one of the most influential records ever made and a benchmark by which other metal albums should be judged. MTV's Kyle Anderson had similar thoughts, saying that 25 years after its release the album remained a "stone cold classic". Carlos Ramirez from Noisecreep believes that Master of Puppets stands as one of the most representative albums of its genre.
1986 is seen as a pinnacle year for thrash metal in which the genre broke out of the underground due to albums such as Megadeth's Peace Sells... but Who's Buying? and Slayer's Reign in Blood. Anthrax released Among the Living in 1987, and by the end of the year these bands, alongside Metallica, were being called the "Big Four" of thrash metal. Master of Puppets frequently tops critic and fan polls of favorite thrash metal albums—the most frequent rival is Slayer's Reign in Blood, also released in 1986 and also considered that band's peak. The rivalry partially stemmed from a contrast in approaches on the two albums, between the sophistication of Master of Puppets and the velocity of Reign in Blood. Histories of the band tend to position Ride the Lightning, Master of Puppets, and ...And Justice for All as a trilogy over the course of which the band's music progressively matured and became more sophisticated. In 2015, the album was deemed "culturally, historically, or aesthetically significant" by the Library of Congress and was selected for preservation in the National Recording Registry. Kerrang! released a tribute album titled Master of Puppets: Remastered with the April 8, 2006, edition of the magazine to celebrate the 20th anniversary of Master of Puppets. The album featured cover versions of Metallica songs by Machine Head, Bullet for My Valentine, Chimaira, Mastodon, Mendeed, and Trivium—all of which are influenced by Metallica. Tour and Burton's death Metallica opted for extensive touring instead of releasing a single or video to promote the album. The Damage, Inc. Tour began in March 1986, and the band spent March to August touring as the opening act for Ozzy Osbourne in the United States, the first tour Metallica played to arena-sized audiences. During sound checks, the group played riffs from Osbourne's previous band Black Sabbath, which Osbourne perceived as mockery. 
Ulrich, however, stated that Metallica was honored to play with Osbourne, who treated the band well on the tour. Metallica was noted by the media for their excessive drinking habits while touring and earned the nickname "Alcoholica". The band members occasionally even wore satirical T-shirts reading "Alcoholica/Drank 'Em All". The band usually played a 45-minute set, often followed by an encore. According to Ulrich, the audiences in bigger cities were already familiar with Metallica's music, unlike in the smaller towns they visited: "In the B-markets, people really don't know what we're all about. But after 45 or 50 minutes we can tell we've won them over. And fans who come to hear Ozzy go home liking Metallica." Metallica won over Osbourne's fans and slowly began to establish a mainstream following. Hetfield broke his wrist in a mid-tour skateboarding accident, and guitar technician John Marshall played rhythm guitar on several dates. The European leg of the tour commenced in September, with Anthrax as the supporting band. The morning after a performance on September 26 in Stockholm, the band's bus rolled off the road, and Burton was thrown through a window and killed instantly. The driver claimed he hit a patch of black ice, but others believed he was either drunk or fell asleep at the wheel. The driver was charged with manslaughter but was not convicted. The band returned to San Francisco and hired Flotsam and Jetsam bassist Jason Newsted to replace Burton. Many of the songs that appeared on the band's next album, ...And Justice for All, were composed during Burton's tenure with the band. Later live performances All of the songs have been performed live, and some became permanent setlist features. Four tracks were featured on the nine-song set list for the album's promotional tour: "Battery" as opener, "Master of Puppets", "Welcome Home (Sanitarium)", and "Damage, Inc."
The title track, which was issued as a single in France, became a live staple and the most-played Metallica song. Loudwire's Chad Childers characterized the band's performance as "furious" and the song as the set's highlight. Rolling Stone described the live performance as "a classic in all its eight-minute glory". While filming the 3D movie Metallica: Through the Never (2013) at the Rogers Arena in Vancouver, the band had crosses rise from the stage during the song, reminiscent of the album's cover art. "Welcome Home (Sanitarium)" is the second-most performed song from the album. The live performance is often accompanied by lasers, pyrotechnic effects and film screens. "Battery" is usually played at the beginning of the setlist or during the encore, accompanied by lasers and flame plumes. "Disposable Heroes" is featured in the video album Orgullo, Pasión, y Gloria: Tres Noches en la Ciudad de México (2009) filmed in Mexico City, in which the song was played on the second of three nights at the Foro Sol. "Orion" is the least-performed song from the album. Its first live performance was during the Escape from the Studio '06 tour, when the band performed the album in its entirety, honoring the 20th anniversary of its release; the album was played in the middle of the set. "Battery", "Welcome Home (Sanitarium)", "Damage, Inc." and the full-length "Master of Puppets" were revived for the band's concerts in 1997 and 1998, after having been retired for a number of years. Track listing All lyrics were written by James Hetfield. The title of "Leper Messiah" comes from a lyric in the David Bowie song "Ziggy Stardust". "Orion" is a multipart instrumental highlighting Burton's bass playing. It opens with a fade-in bass section, heavily processed to resemble an orchestra. It continues with mid-tempo riffing, followed by a bass riff at half-tempo. The tempo accelerates during the latter part, and the piece ends with the music fading out. Burton arranged the middle section, which features a moody bass line and multipart guitar harmonies.
"Damage, Inc." rants about senseless violence and reprisal at an unspecified target. It starts with a series of reversed bass chords based on the chorale prelude of Bach's "Come, Sweet Death". The song then jumps into a rapid rhythm with a pedal-point riff in E that Hammett says was influenced by Deep Purple. Reception Master of Puppets was hailed as a masterpiece by critics outside of the thrash metal audience and cited by some as the genre's greatest album. In a contemporary review, Tim Holmes of Rolling Stone asserted that the band had redefined heavy metal with the technical skill and subtlety showcased on the album, which he described as "the sound of global paranoia". Kerrang! wrote that Master of Puppets "finally put Metallica into the big leagues where they belong". Editor Tom King said Metallica was at an "incredible song-writing peak" during the recording sessions, partially because Burton contributed to the songwriting. By contrast, Spin magazine's Judge I-Rankin was disappointed with the album, saying that, although the production is exceptional and Metallica's experimentation commendable, it eschews the less "intellectual" approach of Kill 'Em All for an inconsistent, MDC-inspired direction. In a retrospective review, AllMusic's Steve Huey viewed Master of Puppets as Metallica's best album and remarked that, although it was not as unexpected as Ride the Lightning, it is a more musically and thematically consistent album. Greg Kot of the Chicago Tribune said the songs were the band's most intense at that point, veering toward "the progressive tendency of Rush." Adrien Begrand of PopMatters praised the production as "a metal version of Phil Spector's Wall of Sound" and believed none of Metallica's subsequent albums could match its passionate and intense musical quality. BBC Music's Eamonn Stack called the album "hard, fast, rock with substance" and likened the songs to stories of "biblical proportions".
Canadian journalist Martin Popoff compared the album to Ride the Lightning and found Master of Puppets not a remake, though similar in "awesome power and effect". Robert Christgau was more critical. Writing in Christgau's Record Guide: The '80s (1990), he said the band's energy and political motivations are respectable, but the music evokes clichéd images of "revolutionary heroes" who are "male chauvinists too inexperienced to know better".
summarized the album as "ultimate thrash, destruction and total blur" that reminded him of the speed and power of Kill 'Em All. Music journalist Martin Popoff observed that Ride the Lightning offered "sophistication and brutality in equal measure" and was seen as something new at the time of its release. Discussing the album's lyrical content, philosopher William Irwin wrote: "After Kill 'Em All, the rebellion and aggression became much more focused as the enemy became more clearly defined. Metallica was deeply concerned about various domains in which the common man was wrongfully yet ingeniously deceived. More precisely, they were highly critical of those in power". The major-key acoustic introduction to "Fight Fire with Fire" displays Metallica's evolution towards a more harmonically complex style of songwriting. The fastest Metallica song in terms of picking speed, it is driven by nimbly tremolo-picked riffs in the verses and chorus. The extended solo at the end dissolves into the sound effect of a vast nuclear explosion. The main riff was taped during the Kill 'Em All Tour, and the acoustic intro was something Burton had been playing on acoustic guitar at the time. The song discourages the "eye for an eye" approach, and its lyrical themes focus on nuclear warfare and Armageddon. "Ride the Lightning" is the first Metallica song to emphasize the misery of the criminal justice system. The lyrics are written from the perspective of a death row inmate anticipating execution by the electric chair. The song, one of the two album tracks that credit Mustaine, begins at a mid-tempo that gradually accelerates as the song progresses. One of the riffs, originally composed by Mustaine, was simplified. It features an instrumental middle section highlighted by Hammett's soloing.
According to Hetfield, the song is not a criticism of capital punishment, but a tale of a man sentenced to death for a crime he did not commit, as in the opening lyrics: "Guilty as charged/But damn it/It ain't right". "For Whom the Bell Tolls" begins with a bell tolling, followed by a marching riff and a high-register bass melody. The chromatic introduction, which Burton wrote before he joined Metallica, is often mistaken for an electric guitar but is actually Burton's bass guitar augmented with distortion and a wah-wah pedal. The lyrics were inspired by Ernest Hemingway's 1940 novel of the same name, which explores the horror and dishonor of modern warfare. "For Whom the Bell Tolls" was released as a promotional single in two versions, an edit on side A and the album version on side B. "Fade to Black" is a power ballad with lyrics about suicide. Hetfield wrote the words because he felt powerless after the band's equipment was stolen before the January 1984 show in Boston. Musically, the song begins with an acoustic guitar introduction overlaid with electric soloing. The song becomes progressively heavier and faster, ending with multi-layered guitar solos. The ballad's arpeggiated chords and reserved singing were incongruous for thrash metal bands at the time and disappointed some of Metallica's fans. The song's structure foreshadows later Metallica ballads, "Welcome Home (Sanitarium)", "One", and "The Day That Never Comes". "Fade to Black" was released as a promotional single in 1984, pressed on phosphorescent green vinyl. "Trapped Under Ice" is about a person who wakes from a cryonic state. Realizing there is nowhere to go, and no one will come to the rescue, the person helplessly awaits impending doom. The song is built on a fast-picked galloping riff, reminiscent of the album's opener. It was inspired by a track Hammett's former band Exodus had demoed called "Impaler", which was later released on that band's 2004 album Tempo of the Damned.
"Escape" was originally titled "The Hammer" and was intended to be released as a single due to its lighter riffs and conventional song structure. The intro features a counterpoint bass melody and a chugging guitar riff that resolves into a standard down-picked riff. "Escape" is Hetfield's most disliked Metallica song, due to it being the result of the record company forcing Metallica to write something more radio friendly. Book authors Mick Wall and Malcolm Dome said the song was influenced by the album-oriented rock of 1970s bands such as Journey and Foreigner, but fans perceived it as an attempt for airplay on rock radio. Metallica has so far performed "Escape" live only once, at the 2012 Orion Music + More festival, while performing Ride the Lightning in its entirety. "Creeping Death" describes the Plague of the Death of the Firstborn (Exodus 12:29). The lyrics deal with the ten plagues visited on Ancient Egypt; four of them are mentioned throughout the song, as well as the Passover. The title was inspired by a scene from The Ten Commandments while the band was watching the movie at Burton's house. The bridge, with its chant "Die, by my hand!", was originally written by Hammett for the song "Die by His Hand" while he was playing in Exodus, who recorded it as a demo but did not feature it on a studio album. Journalist Joel McIver called the song a "moshpit anthem" due to its epic lyrical themes and dramatic atmosphere. "Creeping Death" was released as a single with a B-side titled Garage Days Revisited made up of covers of Diamond Head's "Am I Evil?" and Blitzkrieg's "Blitzkrieg". "The Call of Ktulu", tentatively titled "When Hell Freezes Over", was inspired by H. P. Lovecraft's book The Shadow over Innsmouth, which was introduced to the rest of the band by Burton. The title was taken from one of Lovecraft's key stories featuring Cthulhu, The Call of Cthulhu, although the original name was modified to "Ktulu" for easier pronunciation. 
The track begins with a D minor chord progression written by Mustaine (who later re-used the chord structure on Megadeth's track "Hangar 18"), followed by a two-minute bass solo over a rhythmic riff pattern. Conductor Michael Kamen rearranged the piece for Metallica's 1999 S&M project and won a Grammy Award for Best Rock Instrumental Performance in 2001. Reception and legacy Ride the Lightning received widespread acclaim from music critics. According to Q magazine, the album confirmed Metallica's status as the leading heavy metal band of the modern era. The magazine credited the group for redefining the norms of thrash metal with "Fade to Black", the genre's first power ballad. British rock magazine Kerrang! stated that the album's maturity and musical intelligence helped Metallica expand heavy metal's boundaries. Greg Kot of the Chicago Tribune described Ride the Lightning as a more refined extension of the group's debut. In a retrospective review, Sputnikmusic's Channing Freeman named it one of the few albums that can be charming and powerful at the same time. He praised Hetfield's vocal performance and concluded that Metallica was "firing on all cylinders". AllMusic's Steve Huey saw the album as a more ambitious and remarkable effort than Kill 'Em All. He called Ride the Lightning an "all-time metal classic" because of the band's rich musical imagination and lyrics that avoided heavy metal clichés. The Rolling Stone Album Guide viewed the album as a great step forward for the band and as the record that established the template for Metallica's following two albums. Colin Larkin, writing in the Encyclopedia of Popular Music, singled out "For Whom the Bell Tolls" as an example of Metallica's growing musical potential. Popoff regards Ride the Lightning as an album where "extreme metal became art". "This literally was the first album since (Judas Priest's 1976) Sad Wings of Destiny where the rulebook has changed.
This was a new kind of heaviness; the soft, billowy but explosive production was amazing, the speed was superhuman", stated Popoff. Reviewing the 2016 reissue, Jason Anderson of Uncut considered Ride the Lightning the second-best Metallica album, one that set the pace for metal in the years to come. Megaforce initially pressed 75,000 copies of the album for the US market, while Music for Nations serviced the European market. By late 1984, 85,000 copies of Ride the Lightning had been sold in Europe, resulting in Metallica's first cover story for Kerrang! in its December issue. After signing Metallica, Elektra released the single "Creeping Death" in a sleeve depicting a bridge and a skull painted grey and green. The album peaked at number 100 on the Billboard 200 with no radio exposure. In 1984, the French record label Bernett Records misprinted the album cover in green rather than blue, and 400 copies with the green cover were produced. Ride the Lightning went gold by November 1987 and in 2012 was certified 6× platinum by the Recording Industry Association of America (RIAA) for six million copies shipped in the US. The album, along with Kill 'Em All, was reissued in 2016 as a boxed set including demos and live recordings. Many rock publications have ranked Ride the Lightning on their best album lists. The album placed fifth on IGN Music's "Top 25 Metal Albums" list. Spin listed it as a thrash metal essential, declaring it "the thrashiest thrash ever". According to Guitar World, Ride the Lightning "didn't just change the band's trajectory—it reset the course of metal itself". Corey Deiterman of the Houston Press considers Ride the Lightning the most influential Metallica album, saying it had a lasting impact on genres such as crossover thrash and hardcore punk. In 2017, it was ranked 11th on Rolling Stone's list of "100 Greatest Metal Albums of All Time".
In a 1991 interview, Jason Newsted stated that Ride the Lightning was, next to Metallica, "the best album ever". Touring After recording was completed, Music for Nations founder Martin Hooker wanted to arrange a triple-bill UK tour in March/April 1984 with Exciter, Metallica, and the Rods. The Hell on Earth Tour never materialized because of poor ticket sales. To promote Ride the Lightning, Metallica commenced the Bang That Head That Doesn't Bang European tour on November 16 in Rouen, France, with British new wave band Tank as support. The tour continued with dates in Belgium, Italy, Germany, and the Nordic countries to an average crowd of 1,300. After a Christmas break, the group embarked on a 50-date North American tour, first as a co-headlining act with W.A.S.P. and then as headliners with Armored Saint supporting. At a gig in Portland, Oregon, Metallica covered "The Money Will Roll Right In" by Fang, with Armored Saint onstage. The American leg ended in May 1985, and the band spent the following two months working on the next studio album, Master of Puppets, whose recording sessions were scheduled to begin in September. Metallica performed at the Monsters of Rock festival held at Castle Donington in England on August 17 in front of 70,000 fans. The band was placed between Ratt and Bon Jovi, two glam metal groups whose sound and appearance were much unlike Metallica's. At the start of the set, Hetfield pronounced to the audience: "If you came here to see spandex, eye make-up, and the words 'oh baby' in every fuckin' song, this ain't the fuckin' band!" Two weeks later, Metallica appeared at the Day on the Green festival in Oakland, California, before 90,000 people. The last show Metallica played before recording began was the Loreley Metal Hammer Festival in Germany, headlined by Venom.
Metallica finished 1985 with a show at the Sacramento Memorial Auditorium on December 29 opening for Y&T, and a New Year's Eve concert at the Civic Auditorium in San Francisco on a bill with Metal Church, Exodus, and Megadeth, the first time Metallica and Megadeth shared a stage. At this gig, Metallica premiered "Master of Puppets" and "Disposable Heroes", songs from the then-upcoming third studio album. Track listing All lyrics written by James Hetfield (Kirk Hammett also contributed to lyrics for "Creeping Death"). The bonus tracks on the digital re-release were recorded live at the Seattle Coliseum, Seattle, Washington, on August 29 and 30, 1989, and later appeared on the live album Live Shit: Binge & Purge (1993). 2016 deluxe box set In 2016, the album was remastered and reissued in a limited-edition deluxe box set with an expanded track listing and bonus content. The deluxe edition set includes the original album on vinyl and CD, with an additional vinyl record containing a live show recorded in Los Angeles, a picture disc containing the "Creeping Death" single tracklist, six CDs of live recordings, interviews, rough mixes, and demos recorded from 1984 to 1985, and one DVD of live shows and interviews with the band. Personnel Credits are adapted from the album's liner notes. Metallica James Hetfield – lead vocals, rhythm guitar, acoustic guitar on "Fade to Black" Kirk Hammett – lead guitar, backing vocals Cliff Burton – bass, backing vocals Lars Ulrich – drums, percussion, backing vocals on "Ride the Lightning" * Jason Newsted – bass, |
Metallica's Wherever We May Roam Tour also overlapped with Guns N' Roses' Use Your Illusion Tour. Hetfield suffered second- and third-degree burns to his arms, face, hands, and legs on August 8, 1992, during a Montreal show on the co-headlining Guns N' Roses/Metallica Stadium Tour. The tour included pyrotechnics, which were installed on-stage. Hetfield accidentally walked into a flame shot from a pyrotechnic during a live performance of the introduction of "Fade to Black". The show was cut short shortly after the accident, and Guns N' Roses began their concert to hostile reactions from fans. Newsted said Hetfield's skin was "bubbling like on The Toxic Avenger". The tour recommenced on August 25 in Phoenix, and although Hetfield could sing, he could not play guitar for the remainder of the tour. Guitar technician John Marshall, who had previously filled in on rhythm guitar and was then playing in Metal Church, played guitar for the recovering Hetfield. Brazilian musician Andreas Kisser of Sepultura was initially considered for the role, but Marshall was ultimately chosen. The shows in Mexico City across February and March 1993 during the Nowhere Else to Roam tour were recorded, filmed, and later released as part of the band's first box set, Live Shit: Binge & Purge, released in November 1993. The collection contained three live CDs, three home videos, and a book filled with riders and letters. Pressings of the box set since November 2002 include two DVDs, the first filmed in San Diego on the Wherever We May Roam Tour and the second in Seattle on the Damaged Justice Tour. Binge & Purge was packaged in a cardboard box resembling a typical tour equipment transport box. The box set also featured a recreated copy of an access pass to the "Snakepit" part of the tour stage, as well as a cardboard drawing/airbrush stencil for the "Scary Guy" logo.
The Mexico City shows were also the first time the band met future member Robert Trujillo, who was in Suicidal Tendencies at the time. The final tour supporting the album, the Shit Hits the Sheds Tour, included a performance at Woodstock '94 that followed Nine Inch Nails and preceded Aerosmith on August 13 in front of a crowd of 350,000. Some songs, such as "Enter Sandman", "Nothing Else Matters", and "Sad but True", became permanent staples of Metallica's concert setlists during these and subsequent tours. Other songs, though, such as "Holier than Thou", "The God That Failed", "Through the Never", and "The Unforgiven", were no longer included in performances after 1995 and would not be played again until the 2000s, when Metallica began performing a more extensive back catalog of songs with Robert Trujillo on bass after he joined the band upon completion of the album St. Anger. After touring duties for the album were finished, Metallica filed a lawsuit against Elektra Records, seeking to terminate the band's contract and gain ownership of their master recordings. The band based its claim on a section of the California Labor Code that allows employees to be released from a personal services contract after seven years. Metallica had sold 40 million copies worldwide upon the filing of the suit. Metallica had been signed to the label for over a decade but was still operating under the terms of its original 1984 contract, which provided a relatively low 14% royalty rate. The band members said they took the action because of Robert Morgado's refusal to give them the new record deal they had negotiated with Bob Krasnow, who retired from his job at the label shortly afterwards.
Elektra responded by counter-suing the band, but in December 1994, Warner Music Group United States chairman Doug Morris offered Metallica a lucrative new deal in exchange for dropping the suit, which was reported to be even more generous than the earlier Krasnow deal. In January 1995, both parties settled out of court with a non-disclosure agreement. Metallica played the album in its entirety during the 2012 European Black Album Tour. Critical reception and legacy Metallica was met with widespread acclaim from both heavy metal journalists and mainstream publications, including NME, The New York Times, and The Village Voice. In Entertainment Weekly, David Browne called it "rock's preeminent speed-metal cyclone", and said, "Metallica may have invented a new genre: progressive thrash". Q magazine's Mark Cooper said he found the album's avoidance of metal's typically clumsy metaphors and glossy production refreshing; he said, "Metallica manage to rekindle the kind of intensity that fired the likes of Black Sabbath before metal fell in love with its own cliches". Select magazine's David Cavanagh believed the album lacks artifice and is "disarmingly genuine". In his review for Spin, Alec Foege found the music's harmonies vividly performed and said that Metallica showcase their "newfound versatility" on songs such as "The Unforgiven" and "Holier than Thou". Robert Palmer, writing in Rolling Stone, said that several songs sound like "hard-rock classics" and that, apart from "Don't Tread on Me", Metallica is an "exemplary album of mature but still kickass rock & roll". In his guide to Metallica's albums up to that point, Greg Kot of the Chicago Tribune recommended the album as "a great place for Metallica neophytes to start, with its more concise songs and explosive production." Some reviewers had reservations. 
Jonathan Gold, in the Los Angeles Times, said that while Metallica had embraced pop sensibilities "quite well", there was a sense the group was "no longer in love with the possibilities of its sound" on an album whose difficulty being embraced by the "metal cult" mirrored Bob Dylan going electric in the mid-1960s. More critical was Robert Christgau, who wrote in his "Consumer Guide" for The Village Voice that he "put James Hetfield out of his misery in under five plays" of the album and that he "found life getting shorter with every song". In his 2000 collection Christgau's Consumer Guide, Christgau later graded Metallica a "dud", indicating "a bad record whose details rarely merit further thought". Retrospective appraisals have been positive. In a retrospective article, Kerrang! said Metallica is the album that "propelled [the band] out of the metal ghetto to true mainstream global rock superstardom". Melody Maker said that as a deliberate departure from the band's thrash style on ...And Justice for All, "Metallica was slower, less complicated, and probably twice as heavy as anything they'd done before". In his review for BBC Music, Sid Smith said that although staunch listeners of the band accused them of selling out, Metallica confidently departed from the style of their previous albums and transitioned "from cult metal gods to bona fide rock stars". Classic Rock called it "the absolute pinnacle of Metallica's long and successful career", and credited the album for inspiring 1990s post-grunge music and convincing the music industry to embrace heavy metal as a genre with mass appeal. Author and philosopher Thomas Walker wrote in 2020, "Its success at encapsulating...[individualist] ideas in musical form | the Billboard Hot 100 singles chart and was certified Platinum by the Recording Industry Association of America (RIAA). 
The follow-up single, "Don't Tread on Me", was released promotionally and peaked at number 21 on the Billboard Hot Mainstream Rock Tracks singles chart. "The Unforgiven" was a Top 40 hit; it peaked in the Top 10 in Australia. Metallica was released on August 12, 1991, and was the band's first album to debut at number one on the Billboard 200, selling 598,000 copies in its first week. It was certified platinum in two weeks and spent four consecutive weeks atop the Billboard 200. Meanwhile, more singles were released to further success. "Nothing Else Matters" reached number six in the United Kingdom and Ireland, and "Wherever I May Roam" peaked at number two on the Mainstream Rock Tracks chart, although the 1993 single "Sad but True" charted only for one week on the Billboard Hot 100 at 98. Almost all singles were accompanied by music videos; the Wayne Isham-directed "Enter Sandman" promotional film won an MTV Video Music Award for Best Rock Video at the 1992 MTV Video Music Awards. Internationally, Metallica was also a success. It debuted at number one on the UK Albums Chart and was certified 2× platinum by the British Phonographic Industry (BPI) for shipping 600,000 copies in the UK. Metallica topped the charts in Australia, Canada, Germany, New Zealand, Norway, the Netherlands, Sweden, and Switzerland. It also reached the top five in Austria, Finland, and Japan, as well as the top 10 in Spain. The album failed to reach the top 20 in Ireland, having peaked at number 27. The Australian Recording Industry Association (ARIA) certified the album 12× platinum. It received diamond plaques from the Canadian Recording Industry Association (CRIA) and the Recorded Music NZ (RMNZ) for shipping a million and 150,000 copies, respectively. Logging over 488 weeks on the US Billboard 200, Metallica proved the third-longest charting album in the Nielsen SoundScan era, behind Pink Floyd's The Dark Side of the Moon and Carole King's Tapestry. 
In 2009, it surpassed Shania Twain's Come On Over as the best-selling album of the SoundScan era. It became the first album in the SoundScan era to pass 16 million in sales, and with 16.4 million copies sold by 2016, Metallica is the best-selling album in the United States since Nielsen SoundScan tracking began in 1991. Of that sum, 5.8 million were purchased on cassette. The album never sold fewer than 1,000 copies in a week, and moved a weekly average of 5,000 copies in 2016. Metallica was certified 16× platinum by the Recording Industry Association of America (RIAA) in 2012 for shipping sixteen million copies in the US. Metallica sold 31 million copies worldwide on physical media. All five of Metallica's singles, "Enter Sandman", "The Unforgiven", "Nothing Else Matters", "Wherever I May Roam", and "Sad but True", reached the Billboard Hot 100. Touring In 1991, for the fourth time, Metallica played as part of the Monsters of Rock festival tour. The last concert of the tour was held on September 28, 1991, at Tushino Airfield in Moscow; it was described as "the first free outdoor Western rock concert in Soviet history" and was attended by an estimated 150,000 to 500,000 people. Some unofficial estimates put the attendance as high as 1,600,000. The first tour directly intended to support the album, the Wherever We May Roam Tour, included a performance at the Freddie Mercury Tribute Concert, at which Metallica performed a short set list consisting of "Enter Sandman", "Sad but True", and "Nothing Else Matters"; Hetfield also performed the Queen song "Stone Cold Crazy" with John Deacon, Brian May, and Roger Taylor of Queen and Tony Iommi of Black Sabbath. At one of the tour's first gigs the floor of the stage collapsed. The January 13 and 14, 1992, shows in San Diego were later released in the box set Live Shit: Binge & Purge, while the tour and the album were documented in the documentary A Year and a Half in the Life of Metallica.
Metallica's Wherever We May Roam Tour also overlapped with Guns N' Roses' Use Your Illusion Tour. Hetfield suffered second and third degree burns to his arms, face, hands, and legs on August 8, 1992, during a Montreal show in the co-headlining Guns N' Roses/Metallica Stadium Tour. The tour included pyrotechnics, which were installed on-stage. Hetfield accidentally walked into a flame shot from a pyrotechnic during a live performance of the introduction of "Fade to Black". The show was cut short shortly after this accident, so that Guns N' Roses began their concert to malicious reactions from fans. Newsted said Hetfield's skin was "bubbling like on The Toxic Avenger". The tour recommenced on August 25 in Phoenix, and although Hetfield could sing, he could not play guitar for the remainder of the tour. Guitar technician John Marshall, who had previously filled in on rhythm guitar and was then playing in Metal Church, played guitar for the recovering Hetfield. Brazilian musician Andreas Kisser from Sepultura was initially considered to join the tour, but Marshall ultimately was chosen. The shows in Mexico City across February and March 1993 during the Nowhere Else to Roam tour were recorded, filmed and later also released as part of the band's first box set, which was released in November 1993 and titled Live Shit: Binge & Purge. The collection contained three live CDs, three home videos, and a book filled with riders and letters. Pressings of the box set since November 2002 includes two DVDs, the first one being filmed at San Diego on the Wherever We May Roam Tour, and the latter at Seattle on the Damaged Justice Tour. Binge & Purge was packaged as a cardboard box resembling that of a typical tour equipment transport box. The box set also featured a recreated copy of an access pass to the "Snakepit" part of the tour stage, as well as a cardboard drawing/airbrush stencil for the "Scary Guy" logo. 
The Mexico City shows were also the first time the band met future member Robert Trujillo, who was in Suicidal Tendencies at the time. The final tour supporting the album, the Shit Hits the Sheds Tour, included a performance at Woodstock '94 that followed Nine Inch Nails and preceded Aerosmith on August 13 in front of a crowd of 350,000. Some songs, such as "Enter Sandman", "Nothing Else Matters", and "Sad but True", became permanent staples of Metallica's concert setlists during these and subsequent tours. Other songs, though, such as "Holier than Thou", "The God That Failed", "Through the Never", and "The Unforgiven", were no longer included in performances after 1995 and would not be played again until the 2000s, when Metallica began performing a more extensive back catalog of songs with Robert Trujillo on bass after he joined the band upon completion of the album St. Anger. After touring duties for the album were finished, Metallica filed a lawsuit against Elektra Records, seeking to force the label to terminate the band's contract and give the band ownership of its master recordings. The band based its claim on a section of the California Labor Code that allows employees to be released from a personal services contract after seven years. Metallica had sold 40 million copies
full version of the track was released on the single "The Memory Remains" as "The Outlaw Torn (Unencumbered by Manufacturing Restrictions Version)", with a running time of 10:48. An explanation on the single's back cover stated: Load was Metallica's first album on which all tracks were tuned down to E♭. Hammett states: The band had recorded songs on earlier albums in tunings lower than E: "The God That Failed" (Metallica) was in E♭, and "Sad but True" (Metallica) and "The Thing That Should Not Be" (Master of Puppets) were in D tuning. Hetfield also felt that the change to E♭ was a bonus, as it was easier to perform string bends in the riffs. The Australian CD release of Load includes a bonus interview CD that is unavailable elsewhere. Ten songs from the album have been played live, including "King Nothing", "Until It Sleeps", "Ain't My Bitch", "Bleeding Me", "Wasting My Hate", "Hero of the Day", "The Outlaw Torn", "2 X 4", "Poor Twisted Me", and "Mama Said". Songs that have not been played live in their entirety are "The House Jack Built", "Cure", "Thorn Within", and "Ronnie". Artwork The cover of Load is an original artwork titled "Semen and Blood III". It is one of three photographic studies that Andres Serrano created in 1990 by mingling bovine blood and his own semen between two sheets of Plexiglas. The liner notes simply state "cover art by Andres Serrano" rather than listing the title of the work. In a 2009 interview with Classic Rock, Hetfield expressed his dislike of the album cover and its inspiration: Load also marked the first appearance of a new Metallica logo that rounded off the stabbing edges of the band's earlier logo, greatly simplifying its appearance. The M from the original logo was used to make a shuriken-like symbol known as the "ninja star", which was used as an alternate logo on this and future albums, and on related artwork. The album featured an expansive booklet containing photographs by Anton Corbijn.
These photographs depict the band in various dress, including white A-shirts with suspenders, Cuban suits, and gothic attire. In the aforementioned 2009 interview, James Hetfield said: Reception Load received positive to mixed reviews from critics. Rolling Stone said, "The foursome dams the bombast and chugs half-speed ahead, settling into a wholly magnetizing groove that bridges old-school biker rock and the doomier side of post-grunge '90s rock." Q enthused, "These boys set up their tents in the darkest place of all, in the naked horror of their own heads... Metallica make existential metal and they've never needed the props... Metallica are still awesome... What is new is streamlined attack, the focus and, yes, the tunes." Melody Maker expressed reservations about Load's heaviness compared to its predecessors: "A Metallica album is traditionally an exhausting event. It should rock you to exhaustion, leave you brutalised and drained. This one is no exception. It is, however, the first Metallica album to make me wonder at any point, 'What the fuck was that?' It's as if the jackboot grinding the human face were to take occasional breaks for a pedicure." AllMusic considered Load repetitive, uninteresting and poorly executed. In The Village Voice, Robert Christgau said "this is just a metal record with less solo room, which is good because it concentrates their chops, and more singing, which isn't because they can't." "Some of that stuff was pretty cool," remarked Lars Ulrich of the album and its sequel. "With Load, it was disappointing that some people's reaction to the music was biased by how they dealt with the pictures – the hair and all that crap [see Artwork, above]. People have come up to me years afterwards and said, 'I never gave the record a fair chance because I couldn't get beyond Jason Newsted wearing eyeliner.' But 'The Outlaw Torn', some of that shit is pretty fucking awesome."
Track listing Personnel Credits are adapted from the album's liner notes. Metallica James Hetfield – vocals, rhythm guitar, lead guitar on "2
Q enthused, "These boys set up their tents in the darkest place of all, in the naked horror of their own heads... Metallica make existential metal and they've never needed the props... Metallica are still awesome... What is new is streamlined attack, the focus and, yes, the tunes." Melody Maker expressed reservations about Loads heaviness compared to its predecessors: "A Metallica album is traditionally an exhausting event. It should rock you to exhaustion, leave you brutalised and drained. This one is no exception. It is, however, the first Metallica album to make me wonder at any point, 'What the fuck was that?' It's as if the jackboot grinding the human face were to take occasional breaks for a pedicure." AllMusic considered Load repetitive, uninteresting and poorly executed. In |
front of Garage Days Re-Revisited was modified with headshots of Metallica in 1998 and the track list written on tracing paper. Reception Rolling Stone (12/10/98, print edition, p. 122) – 4 Stars (out of 5) – "Gloriously hard as the album is, you can't miss Metallica's good natured side coming through." Entertainment Weekly (12/18/98, p. 84) – "We'll have to wait until Metallica's next 'proper' album to find out if this trip to the garage recharges their batteries. Still, all things considered, Garage Inc. is an intermittently exhilarating joyride." – Rating: B− CMJ (12/21/98, p. 29) – "Those who still relate to the adolescent angst of Metallica's earliest days will find plenty to like on Garage Inc." In 2005, the album was ranked number 500 in Rock Hard magazine's book of The 500 Greatest Rock & Metal Albums of All Time. Track listing Disc one These tracks (except "Tuesday's Gone"; see below) were recorded in September–October 1998 for the Garage Inc. album. "Sabbra Cadabra" also covers part of the Black Sabbath song "A National Acrobat". "Mercyful Fate" is a medley of the songs "Satan's Fall", "Curse of the Pharaohs", "A Corpse Without Soul", "Into the Coven" and "Evil". "Tuesday's Gone" was recorded December 18, 1997, during the "Don't Call Us, We'll Call You" radio broadcast on KSJO. "The More I See" ends at 03:23 and, after a period of silence, contains a short segment of the Robin Trower song "Bridge of Sighs", from the album of the same name, as a hidden track. "Free Speech for the Dumb", "Loverman", "Astronomy", "The More I See" and "Bridge of Sighs" have never been performed live. Disc two These tracks are a collection of Metallica's earlier B-side covers of songs by artists who inspired the band in its early years. "Last Caress/Green Hell" contains a parody of Iron Maiden's song "Run to the Hills" at the outro; Iron Maiden responded to this on a B-side cover of the Montrose song titled "Space Station No. 5".
The original CD edition has a mastering error in "Green Hell" at 2:01 where the left channel glitches and is out of sync with the right channel for a second. This error does not exist on the original EP release, nor the remastered EP edition. "Am I Evil?" and "Blitzkrieg" were originally released in November 1984 as B-sides contained on the "Creeping Death" single. They were later included as bonus tracks on the 1988 Elektra re-issue of Metallica's debut album Kill 'Em All; subsequent re-issues of Kill 'Em All did not contain the two bonus tracks. "Breadfan" and "The Prince" were originally released by Metallica in September 1988 as B-sides to the "Harvester of Sorrow" single. "Breadfan" was also included on the "Eye of the Beholder" single. "The Prince" was also the B-side to the "One" single, as well as the bonus track on the Japanese pressing of …And Justice for All. "Stone Cold Crazy" was originally released by Metallica in September 1990 on the Rubáiyát: Elektra's 40th Anniversary compilation album, and was later included on the "Enter Sandman" single. "So What" and "Killing Time" were originally released by Metallica in November 1991 as B-sides to "The Unforgiven" single. "So What" was also on the "Sad but True" single, as well as the bonus track on the Japanese pressing of Metallica.
"Motörheadache" was recorded live at The Plant Studios in December, 1995 Personnel Metallica James Hetfield – vocals, rhythm guitar, lead guitar on "Whiskey in the Jar" and "Stone Dead Forever" Lars Ulrich – drums Kirk Hammett – lead guitar, backing vocals Jason Newsted – bass, backing vocals Cliff Burton – bass on "Am I Evil?" and "Blitzkrieg" Guest musicians on "Tuesday's Gone" Pepper Keenan – co-lead vocals Jerry Cantrell – guitar Sean Kinney – additional percussion Jim Martin – guitar John Popper – harmonica Gary Rossington – additional guitar Les Claypool – banjo Technical personnel Disc I Bob Rock, James Hetfield, Lars Ulrich – production Randy Staub – engineering Brian Dobbs – additional engineering Kent Matcke, Leff Lefferts, Chris Manning – assistant engineers Paul DeCarli, Mike Gillies – digital editing Randy Staub, Mike Fraser – mixing George Marino – mastering Disc II Tracks 1–5: "Not very produced" by Metallica (Hetfield, Ulrich, Hammett, Newsted) Csaba "The Hut" Petocz |
(an abbreviation of Symphony and Metallica) is a live album by American heavy metal band Metallica, with the San Francisco Symphony conducted by Michael Kamen. It was recorded on April 21 and 22, 1999, at the Berkeley Community Theatre. This is the final Metallica album to feature bassist Jason Newsted. Album information S&M contains performances of Metallica songs with additional symphonic accompaniment, composed by Michael Kamen, who also conducted the orchestra during the concert. According to James Hetfield, combining heavy metal with an epic classical approach was originally Cliff Burton's idea. His love of classical music, especially of Johann Sebastian Bach, can be found in many instrumental parts and melodic characteristics in Metallica's songwriting, including songs from Ride the Lightning and Master of Puppets. Kamen, who arranged and conducted the orchestral background tracks for "Nothing Else Matters", met the band at the 1992 Grammy award show for the first time, and after hearing the "Elevator version" of the song, suggested the band perform with a whole orchestra; the band, however, did not take him up on the offer until seven years later. Lars Ulrich's favorite band Deep Purple, whom he colorfully inducted into the Rock and Roll Hall of Fame in 2016, is noted for having kicked off this kind of approach 30 years before, in Concerto for Group and Orchestra (1969), although it had actually been done multiple times before, most notably with the Moody Blues' Days of Future Passed in 1967. In addition to songs from previous albums spanning Ride the Lightning through Reload, there are two new compositions: "No Leaf Clover" and "−Human". "The Ecstasy of Gold" by Ennio Morricone, Metallica's entrance music, was played live by the orchestra. "No Leaf Clover" has since been performed by Metallica in concert, using a recording of the orchestral prelude.
Changes were made to the lyrics of some songs, most notably the removal of the second verse and chorus of "The Thing That Should Not Be" and playing the third verse in its place. The "S" in the stylized "S&M" on the album cover is a backwards treble clef, while the "M" is taken from Metallica's logo. The drum kit Ulrich used on the album currently resides in a Guitar Center in San Francisco. Critical reception Rolling Stone (January 20, 2000, pp. 57–59) – 3 stars out of 5 – "...create the most crowded, ceiling-rattling basement rec room in rock....[in its] sheer awesomeness...the live performance succeeded....the monster numbers benefit from supersizing. The effect is more one of timelessness..." Spin (February 2000, pp. 114–5) – 8 out of 10 – "...makes their tempo and texture dynamics...into a topic in and off of itself, a deep evocation of bad-voodoo creeping willies culminating in 'One' and 'Enter Sandman'....Freed from ritualized superhuman extremism, it builds a soundtrack to everyday life." Entertainment Weekly (December 3, 1999, p. 102) – "Buttressed by grim strings, creaky horns, and thundering timpani, staples...creep with fearful new dimension, like an old Posada print come to life." – Rating: B Q (February 2000, p. 86) – 3 stars out of 5 – "...another just about forgivable flirtation with Spinal Tap-esque lunacy....a fine hit-heavy live LP with bolted-on bombast from the S.F. Symphony....Michael Kamen's scores swoop and soar with impressive portent throughout." CMJ (December 20, 1999, p. 24) – "...stunning....orchestral renditions of hits from the band's '90s output." S&M was included in the book 1001 Albums You Must Hear Before You Die. NME ranked the album 48th on its list of 50 Greatest Live Albums. Metal Hammer magazine named it one of the 20 best metal albums of 1999. Accolades Commercial performance S&M sold 300,000 units in the first week of release, and went on to sell a total of 2.5 million copies.
As of 2003, the album had been certified 5× platinum. As of August 2013, the album had sold more than 8 million copies worldwide. 20th anniversary After Kamen's death in 2003, Metallica did not revisit the S&M concept in any further performances or recording work for years. However, the band announced on March 18, 2019, that it would commemorate the 20th anniversary with a single-night concert with the San Francisco Symphony at the Chase Center on September 6 of that year, headed by Michael Tilson Thomas as music director. A second concert was later added on September 8. The shows included
release, Metallica embarked on the Kill 'Em All for One tour with Raven. In February 1984, Metallica supported Venom on the Seven Dates of Hell tour, during which the bands performed in front of 7,000 people at the Aardschok Festival in Zwolle, Netherlands. 1984–1986: Ride the Lightning, Master of Puppets, and Burton's death Metallica recorded its second studio album, Ride the Lightning, at Sweet Silence Studios in Copenhagen, Denmark from February to March 1984. It was released in August 1984 and reached number 100 on the Billboard 200. A French printing press mistakenly printed green covers for the album, which are now considered collectors' items. Mustaine received writing credit for "Ride the Lightning" and "The Call of Ktulu". Elektra Records A&R director Michael Alago and Q-Prime Management co-founder Cliff Burnstein attended a Metallica concert in September 1984. They were impressed with the performance, signed Metallica to Elektra, and made the band a client of Q-Prime Management. Metallica's growing success was such that the band's British label Music for Nations released "Creeping Death" as a limited edition single, which sold 40,000 copies as an import in the U.S. Two of the three songs on the record (cover versions of Diamond Head's "Am I Evil?" and Blitzkrieg's "Blitzkrieg") appeared on the 1988 Elektra reissue of Kill 'Em All. Metallica embarked on its first major European tour with Tank to an average crowd of 1,300. Returning to the U.S., it embarked upon a tour co-headlining with W.A.S.P. and supported by Armored Saint. Metallica played its largest show at the Monsters of Rock festival at Donington Park, England, on August 17, 1985, with Bon Jovi and Ratt, playing to 70,000 people. At a show in Oakland, California, at the Day on the Green festival, the band played to a crowd of 60,000. Metallica's third studio album, Master of Puppets, was recorded at Sweet Silence Studios from September to December 1985 and released in March 1986.
The album reached number 29 on the Billboard 200 and spent 72 weeks on the chart. It was the band's first album to be certified gold on November 4, 1986, and was certified six times platinum in 2003. Steve Huey of AllMusic considered the album "the band's greatest achievement". Following the release of the album, Metallica supported Ozzy Osbourne on a U.S. tour. Hetfield broke his wrist while skateboarding; he continued with the tour, performing vocals, with guitar technician John Marshall playing rhythm guitar. On September 27, 1986, during the European leg of Metallica's Damage, Inc. Tour, members drew cards to determine which bunks on the tour bus they would sleep in. Burton won and chose to sleep in Hammett's bunk. At around sunrise near Dörarp, Sweden, the bus driver lost control and skidded, which caused the bus to overturn several times. Ulrich, Hammett, and Hetfield sustained no serious injuries; however, Burton was pinned under the bus and died. Hetfield said: I saw the bus lying right on him. I saw his legs sticking out. I freaked. The bus driver, I recall, was trying to yank the blanket out from under him to use for other people. I just went, 'Don't fucking do that!' I already wanted to kill the [bus driver]. I don't know if he was drunk or if he hit some ice. All I knew was, he was driving and Cliff wasn't alive anymore. 1986–1994: Newsted joins, ...And Justice for All, and Metallica Burton's death left Metallica's future in doubt. The three remaining members decided Burton would want them to carry on, and with the Burton family's blessings the band sought a replacement. Roughly 40 people, including Hammett's childhood friend, Les Claypool of Primus, Troy Gregory of Prong, and Jason Newsted, formerly of Flotsam and Jetsam, auditioned for the band. Newsted learned Metallica's entire set list; after the audition Metallica invited him to Tommy's Joynt in San Francisco. 
Hetfield, Ulrich, and Hammett decided on Newsted as Burton's replacement; Newsted's first live performance with Metallica was at the Country Club in Reseda, California. The members initiated Newsted by tricking him into eating a ball of wasabi. The band finished its tour in February 1987. After Newsted joined Metallica, the band left their El Cerrito practice space (a suburban house formerly rented by sound engineer Mark Whitaker, dubbed "the Metalli-mansion") and relocated to the adjacent cities of Berkeley and Albany before eventually settling in the Marin County city of San Rafael, north of San Francisco. In March 1987, Hetfield again broke his wrist while skateboarding, forcing the band to cancel an appearance on Saturday Night Live. In August 1987, an all-covers extended play (EP) titled The $5.98 E.P. - Garage Days Re-Revisited was released. The EP was recorded in an effort to use the band's newly constructed recording studio, test Newsted's talents, and to relieve grief and stress following the death of Burton. A video titled Cliff 'Em All commemorating Burton's three years in Metallica was released in 1987; the video included bass solos, home videos, and pictures. Metallica's first studio album since Burton's death, ...And Justice for All, was recorded from January to May 1988 and released in September. The album was a commercial success, reaching number six on the Billboard 200, and was the band's first album to enter the top 10. The album was certified platinum nine weeks after its release. There were complaints about the production; Steve Huey of AllMusic said Ulrich's drums were clicking more than thudding, and the guitars "buzz thinly". To promote the album, Metallica embarked on a tour called Damaged Justice. In 1989, Metallica received its first Grammy Award nomination for ...And Justice for All in the new Best Hard Rock/Metal Performance Vocal or Instrument category.
Metallica was the favorite to win, but the award was given to Jethro Tull for the album Crest of a Knave. The award was controversial with fans and the press; Metallica had been standing off-stage waiting to receive the award after performing the song "One". Jethro Tull had been advised by its manager not to attend the ceremony because the manager expected Metallica to win. The award was listed among Entertainment Weekly's "Grammy's 10 Biggest Upsets". Following the release of ...And Justice for All, Metallica released its debut music video for the song "One", which the band performed in an abandoned warehouse. The footage was remixed with the film Johnny Got His Gun. Rather than organize an ongoing licensing deal, Metallica purchased the rights to the film. The remixed video was submitted to MTV with an alternative, performance-only version that was held back in case MTV banned the remixed version. MTV accepted the remixed version; the video was many viewers' first exposure to Metallica. In 1999, it was voted number 38 in MTV's "Top 100 Videos of All Time" countdown; it was featured in the network's 25th Anniversary edition of ADD Video, which showcased the most popular videos on MTV in the last 25 years. In October 1990, Metallica entered One on One Recording's studio in North Hollywood to record its next album. Bob Rock, who had worked with Aerosmith, The Cult, Bon Jovi, and Mötley Crüe, was hired as the producer. Metallica, also known as The Black Album, was remixed three times, cost , and ended three marriages. Although the release was delayed until 1991, Metallica debuted at number one in ten countries, selling 650,000 units in the U.S. during its first week. The album brought Metallica mainstream attention; it has been certified 16 times platinum in the U.S., which makes it the 25th-best-selling album in the country. The making of Metallica and the following tour was documented in A Year and a Half in the Life of Metallica.
The tour in support of the album, called the Wherever We May Roam Tour, lasted 14 months and included dates in the U.S., Japan, and the UK. In September 1991, 1.6 million rock music fans converged in Moscow to enjoy the first open-air rock concert to be held in the former Soviet Union, part of the Monsters of Rock series. In April 1992, Metallica appeared at The Freddie Mercury Tribute Concert and performed a three-song set. Hetfield later performed "Stone Cold Crazy" with the remaining members of Queen and Tony Iommi. On August 8, 1992, during the co-headlining Guns N' Roses/Metallica Stadium Tour, Hetfield suffered second and third degree burns to his arms, face, hands, and legs. There had been some confusion with the new pyrotechnics setup, which resulted in Hetfield walking into a flame during "Fade to Black". Newsted said Hetfield's skin was "bubbling like on The Toxic Avenger". Metallica returned to the stage 17 days later with guitar technician and Metal Church member John Marshall replacing Hetfield on guitar for the remainder of the tour, although Hetfield was able to sing. Later in 1993, Metallica went on the Nowhere Else to Roam Tour, playing five shows in Mexico City. Live Shit: Binge & Purge, the band's first box set, was released in November 1993. The collection contained three live CDs, three home videos, and a book filled with riders and letters. 1994–2001: Load, Reload, Napster controversy, and Newsted's departure After almost three years of touring to promote Metallica, including a headlining performance at Woodstock '94, Metallica returned to the studio to write and record its sixth studio album. The band went on a brief hiatus in the summer of 1995 and played a short tour, Escape from the Studio '95, comprising three outdoor shows, including a headline show at Donington Park supported by Slayer, Skid Row, Slash's Snakepit, Therapy?, and Corrosion of Conformity. 
The band spent about a year writing and recording new songs, resulting in the release of Load in 1996. Load debuted at number one on the Billboard 200 and ARIA Charts; it was the band's second number-one album. The cover art, Blood and Semen III, was created by Andres Serrano, who pressed a mixture of his own semen and blood between sheets of plexiglass. The release marked another change in the band's musical direction and a new image; the band members' hair was cut. Metallica headlined the alternative rock festival Lollapalooza in mid-1996. During early production of the album, the band had recorded enough material to fill a double album. It was decided that half of the songs would be released; the band would continue to work on the remaining songs and release them the following year. This resulted in the follow-up album Reload. The cover art was again created by Serrano, this time using a mixture of blood and urine. Reload debuted at number one on the Billboard 200 and reached number two on the Top Canadian Album chart. Hetfield said in the 2004 documentary film Metallica: Some Kind of Monster that the band initially thought some of the songs on these albums were of average quality; these were "polished and reworked" until judged releasable. To promote Reload, Metallica performed "Fuel" and "The Memory Remains" with Marianne Faithfull on NBC's Saturday Night Live in December 1997. In 1998, Metallica compiled a double album of cover songs, Garage Inc. The first disc contained newly recorded covers of songs by Diamond Head, Killing Joke, the Misfits, Thin Lizzy, Mercyful Fate, Black Sabbath, and others. The second disc featured the original version of The $5.98 E.P. - Garage Days Re-Revisited, which had become a scarce collectors' item. The album entered the Billboard 200 at number two.
On April 21 and 22, 1999, Metallica recorded two performances with the San Francisco Symphony conducted by Michael Kamen, who had previously worked with producer Rock on "Nothing Else Matters". Kamen approached Metallica in 1991 with the idea of pairing the band's music with a symphony orchestra. Kamen and his staff of over 100 composed additional orchestral material for Metallica songs. Metallica wrote two new Kamen-scored songs for the event, "No Leaf Clover" and "-Human". The audio recording and concert footage were released in 1999 as the album and concert film S&M. It entered the Billboard 200 at number two and the Australian ARIA charts and Top Internet Albums chart at number one. In 2000, Metallica discovered that a demo of its song "I Disappear", which was supposed to be released in combination with the Mission: Impossible II soundtrack, was receiving radio airplay. Tracing the source of the leak, the band found the file on the Napster peer-to-peer file-sharing network, and also found that the band's entire catalogue was freely available. Metallica filed a lawsuit at the U.S. District Court, Central District of California, alleging that Napster violated three areas of the law: copyright infringement, unlawful use of a digital audio interface device, and the Racketeer Influenced and Corrupt Organizations Act (RICO). Ulrich provided a statement to the Senate Judiciary Committee regarding copyright infringement on July 11, 2000. Federal Judge Marilyn Hall Patel ordered the site to place a filter on the program within 72 hours or be shut down. A settlement between Metallica and Napster was reached when German media conglomerate Bertelsmann BMG showed interest in purchasing the rights to Napster for $94 million. Under the terms of the settlement, Napster agreed to block users who shared music by artists who did not want their music shared. On June 3, 2002, Napster filed for Chapter 11 protection under U.S. bankruptcy laws.
On September 3, 2002, an American bankruptcy judge blocked the sale of Napster to Bertelsmann and forced Napster to liquidate its assets according to Chapter 7 of the U.S. bankruptcy laws. At the 2000 MTV Video Music Awards, Ulrich appeared with host Marlon Wayans in a skit that criticized the idea of using Napster to share music. Wayans played a college student listening to Metallica's "I Disappear". Ulrich walked in and asked for an explanation. Ulrich responded to Wayans' excuse that using Napster was just "sharing" by saying that Wayans' idea of sharing was "borrowing things that were not yours without asking". He called in the Metallica road crew, who proceeded to confiscate all of Wayans' belongings, leaving him almost naked in an empty room. Napster creator Shawn Fanning responded later in the ceremony by presenting an award wearing a Metallica shirt, saying, "I borrowed this shirt from a friend. Maybe, if I like it, I'll buy one of my own." Ulrich was later booed on stage at the award show when he introduced the final musical act, Blink-182. Newsted left Metallica on January 17, 2001, as plans were being made to enter the recording studio. He said he left the band for "private and personal reasons, and the physical damage I have done to myself over the years while playing the music that I love". During a Playboy interview with Metallica, Newsted said he wanted to release an album with his side project, Echobrain. Hetfield was opposed to the idea and said, "When someone does a side project, it takes away from the strength of Metallica", and that a side project is "like cheating on your wife in a way". Newsted said Hetfield had recorded vocals for a song used in the film South Park: Bigger, Longer & Uncut, and appeared on two Corrosion of Conformity albums. Hetfield replied, "My name isn't on those records. And I'm not out trying to sell them", and raised questions such as, "Where would it end? Does he start touring with it? Does he sell shirts? 
Is it his band?" 2001–2006: Some Kind of Monster, St. Anger, and Trujillo joins In April 2001, filmmakers Joe Berlinger and Bruce Sinofsky began following Metallica to document the recording process of the band's next studio album. Over two years they recorded more than 1,000 hours of footage. On July 19, 2001, before preparations to enter the recording studio, Hetfield entered rehab to treat his "alcoholism and other addictions". All recording plans were put on hold and the band's future was in doubt. Hetfield left rehab on December 4, 2001, and the band returned to the recording studio on April 12, 2002. Hetfield was required to limit his work to four hours a day between noon and 4 pm, and to spend the rest of his time with his family. The footage recorded by Berlinger and Sinofsky was compiled into the documentary Metallica: Some Kind of Monster, which premiered at the Sundance Film Festival in January 2004. In the documentary, Newsted said his former bandmates' decision to hire a therapist to help solve their problems which he felt they could have solved on their own was "really fucking lame and weak". In June 2003, Metallica's eighth studio album, St. Anger, debuted at number one on the Billboard 200, and drew mixed reactions from critics. Ulrich's "steely" sounding snare drum and the absence of guitar solos received particular criticism. Kevin Forest Moreau of Shakingthrough.net said, "the guitars stumble in a monotone of mid-level, processed rattle; the drums don't propel as much as struggle to disguise an all-too-turgid pace; and the rage is both unfocused and leavened with too much narcissistic navel-gazing". Brent DiCrescenzo of Pitchfork described it as "an utter mess". However, Blender magazine called it the "grimiest and grimmest of the band's Bob Rock productions", and New York Magazine called it "utterly raw and rocking". The title track, "St. 
Anger", won the Grammy Award for Best Metal Performance in 2004; it was used as the official theme song for WWE's SummerSlam 2003. For the duration of St. Angers recording period, producer Bob Rock played bass on the album and in several live shows at which Metallica performed during that time. Once the record was completed, the band started to hold auditions for Newsted's permanent replacement. Bassists Pepper Keenan, Jeordie White, Scott Reeder, Eric Avery, Danny Lohner, and Chris Wyseamong othersauditioned for the role. After three months of auditions, Robert Trujillo, formerly of Suicidal Tendencies and Ozzy Osbourne's band, was chosen as the new bassist. Newsted, who had joined Canadian thrash metal band Voivod by that time, was Trujillo's replacement in Osbourne's band during the 2003 Ozzfest tour, which included Voivod. Before the band's set at the 2004 Download Festival, Ulrich was rushed to the hospital after having an anxiety seizure and was unable to perform. Hetfield searched for last-minute volunteers to replace Ulrich. Slayer drummer Dave Lombardo and Slipknot drummer Joey Jordison volunteered. Lombardo performed "Battery" and "The Four Horsemen", Ulrich's drum technician Flemming Larsen performed "Fade to Black", and Jordison performed the remainder of the set. Having toured for two years in support of St. Anger on the Summer Sanitarium Tour 2003 and the Madly in Anger with the World Tour, with multi-platinum rock band Godsmack in support, Metallica took a break from performing and spent most of 2005 with friends and family. The band opened for The Rolling Stones at SBC Park in San Francisco on November 13 and 15, 2005. 2006–2013: Death Magnetic and Rock and Roll Hall of Fame induction In February 2006, Metallica announced on its official website that after 15 years, long-time producer Bob Rock would not be producing the band's next studio album. Instead, the band chose to work with producer Rick Rubin. 
Around the same time, a petition signed by 1,500 fans was posted online in an attempt to encourage the band to prohibit Rock from producing Metallica albums, saying he had too much influence on the band's sound and musical direction. Rock said the petition hurt his children's feelings; he said, "sometimes, even with a great coach, a team keeps losing. You have to get new blood in there." In December 2006, Metallica released a DVD titled The Videos 1989–2004, which sold 28,000 copies in its first week and entered the Billboard Top Videos chart at number three. Metallica recorded a guitar-based interpretation of Ennio Morricone's "The Ecstasy of Gold" for a tribute album titled We All Love Ennio Morricone, which was released in February 2007. The track received a Grammy nomination at the 50th Grammy Awards for the category "Best Rock Instrumental Performance". A recording of "The Ecstasy of Gold" has been played to introduce Metallica's performances since the 1980s. Metallica scheduled the release of the album Death Magnetic as September 12, 2008, and the band filmed a music video for the album's first single, "The Day That Never Comes". On September 2, 2008, a record store in France | would want them to carry on, and with the Burton family's blessings the band sought a replacement. Roughly 40 people, including Hammett's childhood friend, Les Claypool of Primus, Troy Gregory of Prong, and Jason Newsted, formerly of Flotsam and Jetsam, auditioned for the band. Newsted learned Metallica's entire set list; after the audition Metallica invited him to Tommy's Joynt in San Francisco. Hetfield, Ulrich, and Hammett decided on Newsted as Burton's replacement; Newsted's first live performance with Metallica was at the Country Club in Reseda, California. The members initiated Newsted by tricking him into eating a ball of wasabi. The band finished its tour in February 1987. 
After Newsted joined Metallica, the band left its El Cerrito practice space, a suburban house formerly rented by sound engineer Mark Whitaker and dubbed "the Metalli-mansion", and relocated to the adjacent cities of Berkeley and Albany before eventually settling in the Marin County city of San Rafael, north of San Francisco. In March 1987, Hetfield again broke his wrist while skateboarding, forcing the band to cancel an appearance on Saturday Night Live. In August 1987, an all-covers extended play (EP) titled The $5.98 E.P. - Garage Days Re-Revisited was released. The EP was recorded to make use of the band's newly constructed recording studio, to test Newsted's talents, and to relieve grief and stress following Burton's death. A video titled Cliff 'Em All commemorating Burton's three years in Metallica was released in 1987; the video included bass solos, home videos, and pictures. Metallica's first studio album since Burton's death, ...And Justice for All, was recorded from January to May 1988 and released in September. The album was a commercial success, reaching number six on the Billboard 200; it was the band's first album to enter the top 10 and was certified platinum nine weeks after its release. There were complaints about the production; Steve Huey of AllMusic said Ulrich's drums were clicking more than thudding, and the guitars "buzz thinly". To promote the album, Metallica embarked on a tour called Damaged Justice. In 1989, Metallica received its first Grammy Award nomination, for ...And Justice for All in the new Best Hard Rock/Metal Performance Vocal or Instrument category. Metallica was the favorite to win, but the award went to Jethro Tull for the album Crest of a Knave. The result was controversial with fans and the press; Metallica had been standing off-stage waiting to receive the award after performing the song "One", and Jethro Tull had been advised by its manager not to attend the ceremony because the manager expected Metallica to win.
Entertainment Weekly later named the award one of "Grammy's 10 Biggest Upsets". Following the release of ...And Justice for All, Metallica released its debut music video, for the song "One", which the band performed in an abandoned warehouse. The footage was remixed with scenes from the film Johnny Got His Gun. Rather than negotiate an ongoing licensing deal, Metallica purchased the rights to the film. The remixed video was submitted to MTV along with an alternative, performance-only version that was held back in case MTV banned the remixed version. MTV accepted the remixed version, and the video was many viewers' first exposure to Metallica. In 1999, it was voted number 38 in MTV's "Top 100 Videos of All Time" countdown, and it was featured in the network's 25th-anniversary edition of ADD Video, which showcased the most popular videos on MTV in the previous 25 years. In October 1990, Metallica entered One on One Recording's studio in North Hollywood to record its next album. Bob Rock, who had worked with Aerosmith, The Cult, Bon Jovi, and Mötley Crüe, was hired as the producer. Metallica, also known as The Black Album, was remixed three times, and the stressful production ended three marriages. Although the release was delayed until 1991, Metallica debuted at number one in ten countries, selling 650,000 units in the U.S. during its first week. The album brought Metallica mainstream attention; it has been certified 16 times platinum in the U.S., making it the 25th-best-selling album in the country. The making of Metallica and the following tour was documented in A Year and a Half in the Life of Metallica. The tour in support of the album, called the Wherever We May Roam Tour, lasted 14 months and included dates in the U.S., Japan, and the UK. In September 1991, 1.6 million rock music fans converged on Moscow for the first open-air rock concert to be held in the former Soviet Union, part of the Monsters of Rock series.
In April 1992, Metallica appeared at The Freddie Mercury Tribute Concert and performed a three-song set. Hetfield later performed "Stone Cold Crazy" with the remaining members of Queen and Tony Iommi. On August 8, 1992, during the co-headlining Guns N' Roses/Metallica Stadium Tour, Hetfield suffered second- and third-degree burns to his arms, face, hands, and legs. Confusion over the new pyrotechnics setup had resulted in Hetfield walking into a flame during "Fade to Black". Newsted said Hetfield's skin was "bubbling like on The Toxic Avenger". Metallica returned to the stage 17 days later with guitar technician and Metal Church member John Marshall replacing Hetfield on guitar for the remainder of the tour, although Hetfield was able to sing. In 1993, Metallica went on the Nowhere Else to Roam Tour, playing five shows in Mexico City. Live Shit: Binge & Purge, the band's first box set, was released in November 1993. The collection contained three live CDs, three home videos, and a book filled with riders and letters.

1994–2001: Load, Reload, Napster controversy, and Newsted's departure

After almost three years of touring to promote Metallica, including a headlining performance at Woodstock '94, the band returned to the studio to write and record its sixth studio album. The band went on a brief hiatus in the summer of 1995 and played a short tour, Escape from the Studio '95, comprising three outdoor shows, including a headline show at Donington Park supported by Slayer, Skid Row, Slash's Snakepit, Therapy?, and Corrosion of Conformity. The band spent about a year writing and recording new songs, resulting in the release of Load in 1996. Load debuted at number one on the Billboard 200 and ARIA Charts; it was the band's second number-one album. The cover art, Blood and Semen III, was created by Andres Serrano, who pressed a mixture of his own semen and blood between sheets of plexiglass.
The release marked another change in the band's musical direction and a new image; the band members cut their hair. Metallica headlined the alternative rock festival Lollapalooza in mid-1996. During early production of the album, the band had recorded enough material to fill a double album. It was decided that half of the songs would be released, and that the band would continue to work on the remaining songs and release them the following year. This resulted in the follow-up album Reload. The cover art was again created by Serrano, this time using a mixture of blood and urine. Reload debuted at number one on the Billboard 200 and reached number two on the Top Canadian Album chart. Hetfield said in the 2004 documentary film Metallica: Some Kind of Monster that the band initially thought some of the songs on these albums were of average quality; these were "polished and reworked" until judged releasable. To promote Reload, Metallica performed "Fuel" and "The Memory Remains" with Marianne Faithfull on NBC's Saturday Night Live in December 1997. In 1998, Metallica compiled a double album of cover songs, Garage Inc. The first disc contained newly recorded covers of songs by Diamond Head, Killing Joke, the Misfits, Thin Lizzy, Mercyful Fate, Black Sabbath, and others. The second disc featured the original version of The $5.98 E.P. - Garage Days Re-Revisited, which had become a scarce collectors' item. The album entered the Billboard 200 at number two. On April 21 and 22, 1999, Metallica recorded two performances with the San Francisco Symphony conducted by Michael Kamen, who had previously worked with producer Rock on "Nothing Else Matters". Kamen had approached Metallica in 1991 with the idea of pairing the band's music with a symphony orchestra. Kamen and his staff of over 100 composed additional orchestral material for Metallica songs. Metallica wrote two new Kamen-scored songs for the event, "No Leaf Clover" and "-Human".
The audio recording and concert footage were released in 1999 as the album and concert film S&M. It entered the Billboard 200 at number two and the Australian ARIA charts and Top Internet Albums chart at number one. In 2000, Metallica discovered that a demo of its song "I Disappear", which was supposed to be released in combination with the Mission: Impossible II soundtrack, was receiving radio airplay. Tracing the source of the leak, the band found the file on the Napster peer-to-peer file-sharing network, and also found that its entire catalogue was freely available. Metallica filed a lawsuit in the U.S. District Court for the Central District of California, alleging that Napster had violated three areas of the law: copyright infringement, unlawful use of a digital audio interface device, and the Racketeer Influenced and Corrupt Organizations Act (RICO). Ulrich provided a statement to the Senate Judiciary Committee regarding copyright infringement on July 11, 2000. Federal Judge Marilyn Hall Patel ordered the site to place a filter on the program within 72 hours or be shut down. A settlement between Metallica and Napster was reached when German media conglomerate Bertelsmann BMG showed interest in purchasing the rights to Napster for $94 million. Under the terms of the settlement, Napster agreed to block users who shared music by artists who did not want their music shared. On June 3, 2002, Napster filed for Chapter 11 protection under U.S. bankruptcy laws. On September 3, 2002, an American bankruptcy judge blocked the sale of Napster to Bertelsmann and forced Napster to liquidate its assets under Chapter 7 of the U.S. bankruptcy laws. At the 2000 MTV Video Music Awards, Ulrich appeared with host Marlon Wayans in a skit that criticized the idea of using Napster to share music. Wayans played a college student listening to Metallica's "I Disappear". Ulrich walked in and asked for an explanation.
Ulrich responded to Wayans' excuse that using Napster was just "sharing" by saying that Wayans' idea of sharing was "borrowing things that were not yours without asking". He called in the Metallica road crew, who proceeded to confiscate all of Wayans' belongings, leaving him almost naked in an empty room. Napster creator Shawn Fanning responded later in the ceremony by presenting an award wearing a Metallica shirt, saying, "I borrowed this shirt from a friend. Maybe, if I like it, I'll buy one of my own." Ulrich was later booed on stage at the award show when he introduced the final musical act, Blink-182. Newsted left Metallica on January 17, 2001, as plans were being made to enter the recording studio. He said he left the band for "private and personal reasons, and the physical damage I have done to myself over the years while playing the music that I love". During a Playboy interview with Metallica, Newsted said he wanted to release an album with his side project, Echobrain. Hetfield was opposed to the idea and said, "When someone does a side project, it takes away from the strength of Metallica", and that a side project is "like cheating on your wife in a way". Newsted countered that Hetfield had recorded vocals for a song used in the film South Park: Bigger, Longer & Uncut, and had appeared on two Corrosion of Conformity albums. Hetfield replied, "My name isn't on those records. And I'm not out trying to sell them", and raised questions such as, "Where would it end? Does he start touring with it? Does he sell shirts? Is it his band?"

2001–2006: Some Kind of Monster, St. Anger, and Trujillo joins

In April 2001, filmmakers Joe Berlinger and Bruce Sinofsky began following Metallica to document the recording process of the band's next studio album. Over two years they recorded more than 1,000 hours of footage. On July 19, 2001, before preparations to enter the recording studio, Hetfield entered rehab to treat his "alcoholism and other addictions".
All recording plans were put on hold and the band's future was in doubt. Hetfield left rehab on December 4, 2001, and the band returned to the recording studio on April 12, 2002. Hetfield was required to limit his work to four hours a day, between noon and 4 pm, and to spend the rest of his time with his family. The footage recorded by Berlinger and Sinofsky was compiled into the documentary Metallica: Some Kind of Monster, which premiered at the Sundance Film Festival in January 2004. In the documentary, Newsted called his former bandmates' decision to hire a therapist, to help solve problems he felt they could have solved on their own, "really fucking lame and weak". In June 2003, Metallica's eighth studio album, St. Anger, debuted at number one on the Billboard 200 and drew mixed reactions from critics. Ulrich's "steely"-sounding snare drum and the absence of guitar solos received particular criticism. Kevin Forest Moreau of Shakingthrough.net said, "the guitars stumble in a monotone of mid-level, processed rattle; the drums don't propel as much as struggle to disguise an all-too-turgid pace; and the rage is both unfocused and leavened with too much narcissistic navel-gazing". Brent DiCrescenzo of Pitchfork described it as "an utter mess". However, Blender magazine called it the "grimiest and grimmest of the band's Bob Rock productions", and New York Magazine called it "utterly raw and rocking". The title track, "St. Anger", won the Grammy Award for Best Metal Performance in 2004 and was used as the official theme song for WWE's SummerSlam 2003. During the recording of St. Anger, producer Bob Rock played bass on the album and at several live shows. Once the record was completed, the band began holding auditions for Newsted's permanent replacement. Bassists Pepper Keenan, Jeordie White, Scott Reeder, Eric Avery, Danny Lohner, and Chris Wyse, among others, auditioned for the role.
After three months of auditions, Robert Trujillo, formerly of Suicidal Tendencies and Ozzy Osbourne's band, was chosen as the new bassist. Newsted, who had joined Canadian thrash metal band Voivod by that time, was Trujillo's replacement in Osbourne's band during the 2003 Ozzfest tour, which included Voivod. Before the band's set at the 2004 Download Festival, Ulrich was rushed to the hospital after having an anxiety seizure and was unable to perform. Hetfield searched for last-minute volunteers to replace Ulrich; Slayer drummer Dave Lombardo and Slipknot drummer Joey Jordison volunteered. Lombardo performed "Battery" and "The Four Horsemen", Ulrich's drum technician Flemming Larsen performed "Fade to Black", and Jordison performed the remainder of the set. After touring for two years in support of St. Anger on the Summer Sanitarium Tour 2003 and the Madly in Anger with the World Tour, supported by multi-platinum rock band Godsmack, Metallica took a break from performing and spent most of 2005 with friends and family. The band opened for The Rolling Stones at SBC Park in San Francisco on November 13 and 15, 2005.

2006–2013: Death Magnetic and Rock and Roll Hall of Fame induction

In February 2006, Metallica announced on its official website that after 15 years, long-time producer Bob Rock would not be producing the band's next studio album; instead, the band chose to work with producer Rick Rubin. Around the same time, a petition signed by 1,500 fans was posted online in an attempt to encourage the band to prohibit Rock from producing Metallica albums, saying he had too much influence on the band's sound and musical direction. Rock said the petition hurt his children's feelings; he said, "sometimes, even with a great coach, a team keeps losing. You have to get new blood in there." In December 2006, Metallica released a DVD titled The Videos 1989–2004, which sold 28,000 copies in its first week and entered the Billboard Top Videos chart at number three.
Metallica recorded a guitar-based interpretation of Ennio Morricone's "The Ecstasy of Gold" for a tribute album titled We All Love Ennio Morricone, which was released in February 2007. The track received a Grammy nomination at the 50th Grammy Awards in the category "Best Rock Instrumental Performance". A recording of "The Ecstasy of Gold" has been played to introduce Metallica's performances since the 1980s. Metallica scheduled the release of the album Death Magnetic for September 12, 2008, and the band filmed a music video for the album's first single, "The Day That Never Comes". On September 2, 2008, a record store in France began selling copies of Death Magnetic nearly two weeks before its scheduled worldwide release date, which resulted in the album being made available on peer-to-peer clients. This prompted the band's UK distributor, Vertigo Records, to officially release the album on September 10, 2008. Rumors of Metallica or Warner Bros. taking legal action against the French retailer were unconfirmed, though Ulrich responded to the leak by saying, "...We're ten days from release. I mean, from here, we're golden. If this thing leaks all over the world today or tomorrow, happy days. Happy days. Trust me", and, "By 2008 standards, that's a victory. If you'd told me six months ago that our record wouldn't leak until 10 days out, I would have signed up for that." Death Magnetic debuted at number one in the U.S., selling 490,000 units; Metallica became the first band in the history of the Billboard 200 to have five consecutive studio albums debut at number one. A week after its release, Death Magnetic remained at number one on the Billboard 200 and the European album chart, and it became the fastest-selling album of 2008 in Australia. Death Magnetic remained at number one on the Billboard 200 album chart for three consecutive weeks.
Metallica was one of only two artists whose album remained at number one on the Billboard 200 for three consecutive weeks in 2008, the other being Jack Johnson with Sleep Through the Static. Death Magnetic also remained at number one on Billboard's Hard Rock, Modern Rock/Alternative, and Rock album charts for five consecutive weeks. The album reached number one in 32 countries outside the U.S., including the UK, Canada, and Australia. In November 2008, Metallica's record deal with Warner Bros. ended and the band considered releasing its next album through the internet. On January 14, 2009, it was announced that Metallica would be inducted into the Rock and Roll Hall of Fame on April 4, 2009, and that former bassist Jason Newsted, who had left the band in 2001, would perform with the band at the ceremony. Initially, it was announced that the matter had been discussed and that bassist Trujillo had agreed not to play because he "wanted to see the Black Album band". However, during the band's set of "Master of Puppets" and "Enter Sandman", both Trujillo and Newsted were on stage. Ray Burton, father of the late Cliff Burton, accepted the honor on his behalf. Although Dave Mustaine was not to be inducted with them, Metallica invited him to take part in the induction ceremony; Mustaine declined because of his touring commitments in Europe. Metallica, Slayer, Megadeth, and Anthrax performed on the same bill for the first time on June 16, 2010, at Warsaw Babice Airport, Warsaw, as part of the Sonisphere Festival series. The show in Sofia, Bulgaria, on June 22, 2010, was broadcast via satellite to cinemas. The bands also played concerts in Bucharest on June 26, 2010, and Istanbul on June 27, 2010. On June 28, 2010, Death Magnetic was certified double platinum by the RIAA. Metallica's World Magnetic Tour ended in Melbourne on November 21, 2010; the band had been touring for over two years in support of Death Magnetic.
To accompany the final tour dates in Australia and New Zealand, a live, limited-edition EP of past performances in Australia, Six Feet Down Under, was released. The EP was followed by Six Feet Down Under (Part II), released on November 12, 2010, which contains a further eight songs recorded during the first two Oceanic legs of the World Magnetic Tour. On November 26, 2010, Metallica released a live EP titled Live at Grimey's, which was recorded in June 2008 at Grimey's Record Store, just before the band's appearance at the Bonnaroo Music Festival that year. In a June 2009 interview with Italy's Rock TV, Ulrich said Metallica was planning to continue touring until August 2010 and that there were no plans for a tenth album, though he said he was sure the band would collaborate with producer Rick Rubin again. According to Blabbermouth.net, the band was considering recording its next album in the second half of 2011. In November 2010, during an interview with The Pulse of Radio, Ulrich said Metallica would return to writing in 2011: "There's a bunch of balls in the air for 2011, but I think the main one is we really want to get back to writing again. We haven't really written since, what, '06, '07, and we want to get back to kind of just being creative again. Right now we are going to just chill out and then probably start up again in, I'd say, March or April, and start probably putting the creative cap back on and start writing some songs." On November 9, 2010, Metallica announced it would headline the Rock in Rio festival in Rio de Janeiro on September 25, 2011. On December 13, 2010, the band announced it would again play as part of the "big four" during the Sonisphere Festival at Knebworth House, Hertfordshire, on July 8, 2011, the first time all of the "big four" bands played on the same stage in the UK. On December 17, 2010, another "big four" Sonisphere performance, to take place in France on July 9, was announced.
On January 25, 2011, another "big four" performance, on April 23, 2011, at the Empire Polo Club in Indio, California, was announced; it was the first time all of the "big four" bands played on the same stage in the U.S. On February 17, 2011, a show in Gelsenkirchen, Germany, on July 2, 2011, was announced. On February 22, a "big four" show in Milan on July 6, 2011, was announced. On March 2, 2011, another "big four" concert, which took place in Gothenburg on July 3, 2011, was announced. The final "big four" concert was in New York City, at Yankee Stadium, on September 14, 2011. In an interview at the April 2011 "big four" concert, Robert Trujillo said Metallica would again work with Rick Rubin as producer for the new album and was "really excited to write some new music. There's no shortage of riffage in Metallica world right now." He added, "The first album with Rick was also the first album for me, so in a lot of ways, you're kind of testing the water. Now that we're comfortable with Rick and his incredible engineer, Greg Fidelman, who worked with Slayer, actually, on this last record (it's my hero), it's a great team. And it's only gonna get better; I really believe that. So I'm super-excited." In June 2011, Rubin said Metallica had begun writing its new album. On June 15, 2011, Metallica announced that recording sessions with singer-songwriter Lou Reed had concluded. The album, titled Lulu, was recorded over several months and comprised ten songs based on Frank Wedekind's "Lulu" plays, Earth Spirit and Pandora's Box. The album was released on October 31, 2011. The recording of the album was problematic at times; Ulrich later said Reed challenged him to a "street fight". On October 16, 2011, Trujillo confirmed that the band was back in the studio and writing new material. He said, "The writing process for the new Metallica album has begun.
We've been in the studio with Rick Rubin, working on a couple of things, and we're going to be recording during most of next year." Metallica was due to make its first appearance in India at the "India Rocks" concert, supporting the 2011 Indian Grand Prix, but the concert was canceled when the venue was found to be unsafe; fans raided the stage during the event, and the organizers were later arrested for fraud. Metallica made its Indian debut in Bangalore on October 30, 2011. On November 10, it was announced that Metallica would headline the main stage on Saturday, June 9, 2012, at the Download Festival at Donington Park and would play The Black Album in its entirety. Metallica celebrated its 30th anniversary by playing four shows at the Fillmore in San Francisco in December 2011. The shows were exclusive to Met Club members, with tickets priced at $6 each or $19.81 for all four nights. The shows drew on songs from across the band's career and featured guest appearances by artists who had either helped or influenced Metallica: Lloyd Grant, Dave Mustaine, Jason Newsted, Glenn Danzig, Ozzy Osbourne, Jerry Cantrell, Apocalyptica, members of Diamond Head, and King Diamond joined Metallica on stage for the appropriate songs. In December 2011, Metallica began releasing online songs that had been written for Death Magnetic but not included on the album. On December 13, 2011, the band released Beyond Magnetic, a digital EP available exclusively on iTunes; it was released on CD in January 2012. On February 7, 2012, Metallica announced it would start a new music festival, Orion Music + More, which took place on June 23 and 24, 2012, in Atlantic City. Metallica also confirmed that it would headline the festival on both days and perform two of its most critically acclaimed albums in their entirety: The Black Album on one night, and Ride the Lightning on the other.
In a July 2012 interview with Canadian radio station 99.3 The Fox, Ulrich said Metallica would not release its new album until at least early 2014. In November 2012, Metallica left Warner Bros. Records and launched an independent record label, Blackened Recordings, which would produce the band's future releases. The band acquired the rights to all of its studio albums, which were all reissued through the new label. Blackened releases were licensed through Warner subsidiary Rhino Entertainment in North America and internationally through Universal Music. On September 20, 2012, Metallica announced via its official website that a new DVD containing footage of shows it performed in Quebec in 2009 would be released that December; fans would get the chance to vote for two setlists that would appear on the DVD. The film, titled Quebec Magnetic, was released in the U.S. on December 10, 2012. 2013–2019: Metallica: Through the Never and Hardwired... to Self-Destruct In an interview with Classic Rock on January 8, 2013, Ulrich said of the band's upcoming album, "What we're doing now certainly sounds like a continuation [of Death Magnetic]". He also said, "I love Rick [Rubin]. We all love Rick. We're in touch with Rick constantly. We'll see where it goes. It would stun me if the record came out in 2013." Also in 2013, the band starred in a 3D concert film titled Metallica: Through the Never, which was directed by Nimród Antal and released in IMAX theaters on September 27. In an interview dated July 22, 2013, Ulrich told Ultimate
The word mural began to be used at the beginning of the 20th century. In 1906, Dr. Atl issued a manifesto calling for the development of a monumental public art movement in Mexico; he named it in Spanish pintura mural (English: wall painting). In ancient Roman times, a mural crown was given to the fighter who was first to scale the wall of a besieged town. "Mural" comes from the Latin muralis, meaning "wall." History Antique art Murals of sorts date to Upper Paleolithic times, such as the cave paintings in the Lubang Jeriji Saléh cave in Borneo (40,000–52,000 BP) and Chauvet Cave in the Ardèche department of southern France (around 32,000 BP). Many ancient murals have been found within ancient Egyptian tombs (around 3150 BC), the Minoan palaces (Middle period III of the Neopalatial period, 1700–1600 BC), the Oxtotitlán cave and Juxtlahuaca in Mexico (around 1200–900 BC) and in Pompeii (around 100 BC – AD 79). During the Middle Ages murals were usually executed on dry plaster (secco). The huge collection of Kerala mural paintings dating from the 14th century are examples of fresco secco. In Italy, circa 1300, the technique of painting frescos on wet plaster was reintroduced and led to a significant increase in the quality of mural painting. Modern art The term mural became better known with the Mexican muralism art movement (Diego Rivera, David Siqueiros and José Orozco). There are many different styles and techniques. The best-known is probably fresco, which uses water-soluble paints with a damp lime wash, rapid application of the resulting mixture over a large surface, and often in parts (but with a sense of the whole). The colors lighten as they dry. The marouflage method has also been used for millennia. Murals today are painted in a variety of ways, using oil- or water-based media. The styles can vary from abstract to trompe-l'œil (a French term meaning to "fool" or "trick the eye").
Initiated by the works of mural artists like Graham Rust or Rainer Maria Latzke in the 1980s, trompe-l'œil painting has experienced a renaissance in private and public buildings in Europe. Today, the beauty of a wall mural has become much more widely available with a technique whereby a painting or photographic image is transferred to poster paper or canvas which is then pasted to a wall surface (see wallpaper, Frescography) to give the effect of either a hand-painted mural or realistic scene. A special type of mural painting is Lüftlmalerei, still practised today in the villages of the Alpine valleys. Well-known examples of such façade designs from the 18th and 19th centuries can be found in Mittenwald, Garmisch, Unter- and Oberammergau. Technique In the history of mural painting several methods have been used: A fresco painting, from the Italian word affresco which derives from the adjective fresco ("fresh"), describes a method in which the paint is applied on plaster on walls or ceilings. The buon fresco technique consists of painting in pigment mixed with water on a thin layer of wet, fresh lime mortar or plaster. The pigment is then absorbed by the wet plaster; after a number of hours, the plaster dries and reacts with the air: it is this chemical reaction which fixes the pigment particles in the plaster. After this, the painting retains its fresh, brilliant colors for a very long time, up to centuries. Fresco-secco painting is done on dry plaster (secco means "dry" in Italian). The pigments thus require a binding medium, such as egg (tempera), glue or oil, to attach the pigment to the wall. Mezzo-fresco is painted on nearly dry plaster, and was defined by the sixteenth-century author Ignazio Pozzo as "firm enough not to take a thumb-print" so that the pigment only penetrates slightly into the plaster. By the end of the sixteenth century this had largely displaced the buon fresco method, and was used by painters such as Giambattista Tiepolo or Michelangelo.
This technique had, in reduced form, the advantages of a secco work. Material In Greco-Roman times, encaustic colors, mostly applied in a cold state, were used. Tempera painting is one of the oldest known methods in mural painting. In tempera, the pigments are bound in an albuminous medium such as egg yolk or egg white diluted in water. In 16th-century Europe, oil painting on canvas arose as an easier method for mural painting. The advantage was that the artwork could be completed in the artist's studio and later transported to its destination and there attached to the wall or ceiling. Oil paint may be a less satisfactory medium for murals because of its lack of brilliance in colour. Also, the pigments are yellowed by the binder or are more easily affected by atmospheric conditions. Different muralists tend to become experts in their preferred medium and application, whether that be oil paints, emulsion or acrylic paints applied by brush, roller or airbrush/aerosols. Clients will often ask for a particular style and the artist may adjust to the appropriate technique. A consultation usually leads to a detailed design and layout of the proposed mural, with a price quote that the client approves before the muralist starts on the work. The area to be painted can be gridded to match the design, allowing the image to be scaled accurately step by step. In some cases, the design is projected straight onto the wall and traced with pencil before painting begins. Some muralists will paint directly without any prior sketching, preferring the spontaneous technique. Once completed, the mural can be given coats of varnish or protective acrylic glaze to protect the work from UV rays and surface damage. In a modern, quick form of muraling, young enthusiasts also use POP (plaster of Paris) clay mixed with glue or bonding agent to form the desired models on canvas board. The canvas is later set aside to let the clay dry.
Once dried, the canvas and the shapes can be painted with colors of one's choice and later coated with varnish. As an alternative to a hand-painted or airbrushed mural, digitally printed murals can also be applied to surfaces. Already existing murals can be photographed and then reproduced in near-to-original quality. The disadvantages of pre-fabricated murals and decals are that they are often mass-produced and lack the allure and exclusivity of original artwork. They are often not fitted to the individual wall sizes of the client, and the client's personal ideas or wishes cannot be added to the mural as it progresses. The Frescography technique, a digital manufacturing method (CAM) invented by Rainer Maria Latzke, addresses some of the personalisation and size restrictions. Digital techniques are commonly used in advertisements. A "wallscape" is a large advertisement on or attached to the outside wall of a building. Wallscapes can be painted directly on the wall as a mural, or printed on vinyl and securely attached to the wall in the manner of a billboard. Although not strictly classed as murals, large-scale printed media are often referred to as such. Advertising murals were traditionally painted onto buildings and shops by sign-writers, and later took the form of large-scale poster billboards. Significance Murals are important in that they bring art into the public sphere. Due to the size, cost, and work involved in creating a mural, muralists must often be commissioned by a sponsor. Often it is the local government or a business, but many murals have been paid for with grants of patronage. For artists, their work gets a wide audience who otherwise might not set foot in an art gallery. A city benefits from the beauty of a work of art. Murals can be a relatively effective tool of social emancipation or achieving a political goal. Murals have sometimes been created against the law, or have been commissioned by local bars and coffee shops.
Often, the visual effects are an enticement to attract public attention to social issues. State-sponsored public art expressions, particularly murals, are often used by totalitarian regimes as a tool of propaganda. However, despite the propagandist character of these works, some of them still have artistic value. Murals can have a dramatic impact, whether conscious or subconscious, on the attitudes of passers-by when they are added to areas where people live and work. It can also be argued that the presence of large public murals adds aesthetic improvement to the daily lives of residents or of employees at a corporate venue. Large-format hand-painted murals were the norm for advertisements in cities across America before the introduction of vinyl and digital posters. It was an expensive form of advertising subject to strict signage laws, but it gained attention and improved local aesthetics. Other world-famous murals can be found in Mexico, New York City, Philadelphia, Belfast, Derry, Los Angeles, Nicaragua, Cuba, the Philippines, and India. They have functioned as an important means of communication for members of socially, ethnically and racially divided communities in times of conflict. They have also proved to be an effective tool in establishing a dialogue and hence bridging the cleavage in the long run. The Indian state of Kerala has distinctive murals. These Kerala mural paintings are on the walls of Hindu temples and can be dated from the 9th century AD. The San Bartolo murals of the Maya civilization in Guatemala are the oldest example of this art in Mesoamerica and are dated at 300 BC. Many rural towns have begun using murals to create tourist attractions in order to boost economic income. Colquitt, Georgia, was chosen to host the 2010 Global Mural Conference. The town had more than twelve murals completed, and hosted the Conference along with Dothan, Alabama, and Blakely, Georgia.
Politics The Mexican mural movement in the 1930s brought new prominence to murals as a social and political tool. Diego Rivera, José Orozco and David Siqueiros were the most famous artists of the movement. Between 1932 and 1940, Rivera also painted murals in San Francisco, Detroit, and New York City. In 1933, he completed a famous series of twenty-seven fresco panels entitled Detroit Industry on the walls of an inner court at the Detroit Institute of Arts. During the McCarthyism of the 1950s, a large sign was placed in the courtyard defending the artistic merit of the murals while attacking his politics as "detestable".
In 1948, the Colombian government hosted the IX Pan-American Conference to establish the Marshall Plan for the Americas. The director of the OEA and the Colombian government commissioned the master Santiago Martínez Delgado to paint a mural in the Colombian congress building to commemorate the event. Martínez decided to depict the Congress of Cúcuta, and painted Bolívar in front of Santander, upsetting liberals. Following the assassination of Jorge Eliécer Gaitán, the mobs of El Bogotazo tried to burn the capitol, but the Colombian Army stopped them. Years later, in the 1980s, with liberals in charge of the Congress, a resolution was passed to turn the whole chamber of the Elliptic Room 90 degrees, putting the main mural to the side, and Alejandro Obregón was commissioned to paint a non-partisan mural in the surrealist style. Northern Ireland contains some of the most famous political murals in the world. Almost 2,000 murals have been documented in Northern Ireland since the 1970s. In recent times, many murals are non-sectarian, concerning political and social issues such as racism and environmentalism, and many are completely apolitical, depicting children at play and scenes from everyday life. (See Northern Irish murals.) A mural that is social rather than political in nature covers a wall in an old building, once a prison, at the top of a cliff in Bardiyah, Libya. It was painted and signed by the artist in April 1942, weeks before his death on the first day of the First Battle of El Alamein. Known as the Bardia Mural, it was created by the English artist Private John Frederick Brill. In 1961, East Germany began to erect a wall between East and West Berlin, which became famous as the Berlin Wall. While painting was not allowed on the East Berlin side, artists painted on the western side of the Wall from the 1980s until the fall of the Wall in 1989. Many unknown and known artists such as Thierry
In a perfectly competitive market with no externalities, per-unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium. Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power. For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred. The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases the ability to buy (the income effect). Other factors can change demand; for example, an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure.
All determinants are predominantly taken as constant factors of demand and supply. Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged. That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply curve can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as the price of substitutes, cost of production, technology applied and various factors of inputs of production, are all taken to be constant for a specific time period of evaluation of supply. Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded.
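This equilibrium prediction can be sketched numerically for linear demand and supply curves. The curve parameters below are illustrative assumptions, not values from the text:

```python
def quantity_demanded(price, a=100.0, b=2.0):
    # Linear demand curve: Qd = a - b*price (hypothetical parameters)
    return a - b * price

def quantity_supplied(price, c=-20.0, d=4.0):
    # Linear supply curve: Qs = c + d*price (hypothetical parameters)
    return c + d * price

def equilibrium(a=100.0, b=2.0, c=-20.0, d=4.0):
    # Setting Qd = Qs: a - b*p = c + d*p  =>  p* = (a - c) / (b + d)
    p_star = (a - c) / (b + d)
    return p_star, quantity_demanded(p_star, a, b)

p_star, q_star = equilibrium()
print(p_star, q_star)  # 20.0 60.0

# Below the equilibrium price there is a shortage (excess demand),
# which is posited to bid the price back up:
print(quantity_demanded(10.0) - quantity_supplied(10.0))  # 60.0
```

An income rise for a normal good could be modeled here as an increase in the intercept `a`, shifting the demand curve outward and raising both the equilibrium price and quantity.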
Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as in the figure), or in supply. For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit. The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium. On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and overtime and temporary work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management. These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs, and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market. Marginalist theory, such as the above, describes consumers as attempting to reach most-preferred positions, subject to income and wealth constraints, while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where the marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases. Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference.
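The producer's marginal comparison can be made concrete with a small sketch. Here a price-taking firm faces a hypothetical rising marginal cost schedule and expands output while the next unit's marginal profit is still positive:

```python
def marginal_cost(q):
    # Hypothetical rising marginal cost: the q-th unit costs 2 + q to produce
    return 2.0 + q

def optimal_output(price, max_q=1000):
    # A price-taking firm expands output while the marginal profit on the
    # next unit (price minus its marginal cost) remains positive.
    q = 0
    while q < max_q and price - marginal_cost(q + 1) > 0:
        q += 1
    return q

# At a market price of 10, units 1 through 7 (marginal costs 3 to 9) are
# profitable; the 8th unit would cost exactly 10, yielding zero marginal
# profit, so production stops there.
print(optimal_output(10.0))  # 7
```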
At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more or less of something, rather than necessarily all or nothing. Other applications of demand and supply include the distribution of income among the factors of production, including labour and capital, through factor markets. In a competitive labour market, for example, the quantity of labour employed and the price of labour (the wage rate) depend on the demand for labour (from employers for production) and the supply of labour (from potential workers). Labour economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labour income, labour mobility, and (un)employment, productivity through human capital, and related public-policy issues. Demand-and-supply analysis is used to explain the behaviour of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics. Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions under which supply and demand through the market is an efficient mechanism for allocating resources. Market structure Market structure refers to features of a market, including the number of firms in the market, the distribution of market shares between them, product uniformity across firms, how easy it is for firms to enter and exit the market, and forms of competition in the market. A market structure can have several types of interacting market systems.
Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to substitute or replace markets with varying degrees of government-directed economic planning. Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Regulations help to mitigate negative externalities of goods and services when the private equilibrium of the market does not match the social equilibrium. One example concerns building codes: if these were absent in a purely competition-regulated market system, serious injuries or deaths might be required before companies began improving structural safety, since consumers might at first be unaware of or unconcerned about safety issues and so exert no pressure on companies to provide them, while companies would be motivated not to provide proper safety features because doing so would cut into their profits. The concept of "market type" is different from the concept of "market structure"; nevertheless, there are a variety of types of markets. The different market structures produce cost curves based on the type of structure present. The different curves are developed based on the costs of production; specifically, the graph contains marginal cost, average total cost, average variable cost, average fixed cost, and marginal revenue, which is sometimes equal to the demand, average revenue, and price in a price-taking firm. Perfect competition Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit.
Firms in perfect competition are "price takers" (they do not have enough market power to profitably increase the price of their goods or services). A good example would be that of digital marketplaces, such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfectly competitive market have perfect knowledge about the products that are being sold in this market. Imperfect competition Imperfect competition is a type of market structure showing some but not all features of competitive markets. Monopolistic competition Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what may be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities. Monopoly A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service. Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are harmful, especially in industries where multiple firms would result in more costs than benefits (i.e. natural monopolies). Natural monopoly: A monopoly in an industry where one producer can produce output at a lower cost than many small producers. Oligopoly An oligopoly is a market structure in which a market or industry is dominated by a small number of firms (oligopolists). Oligopolies can create the incentive for firms to engage in collusion and form cartels that reduce competition, leading to higher prices for consumers and less overall market output. Alternatively, oligopolies can be fiercely competitive and engage in flamboyant advertising campaigns.
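The contrast between these market structures can be illustrated with a toy linear market. The demand and cost numbers below are assumptions for illustration only, and the duopoly case is modeled as Cournot quantity competition:

```python
# Inverse demand P(Q) = A - Q; every firm has constant marginal cost C.
A, C = 100.0, 10.0

# Perfect competition: competition drives price down to marginal cost (P = C).
q_competitive = A - C            # 90.0 units, sold at P = 10.0

# Monopoly: produce where marginal revenue equals marginal cost.
# Revenue is (A - Q)*Q, so MR = A - 2Q; setting MR = C gives Q = (A - C)/2.
q_monopoly = (A - C) / 2         # 45.0 units
p_monopoly = A - q_monopoly      # 55.0: higher price, lower output

# Cournot duopoly: each firm best-responds with q_i = (A - C - q_j) / 2;
# iterating the best responses converges to the Nash equilibrium.
q1 = q2 = 0.0
for _ in range(200):
    q1 = (A - C - q2) / 2
    q2 = (A - C - q1) / 2
q_duopoly = q1 + q2              # converges to 60.0 units in total
p_duopoly = A - q_duopoly        # converges to a price of 40.0

print(q_competitive, q_monopoly, p_monopoly, q_duopoly, p_duopoly)
```

The duopoly outcome lands between the two extremes, matching the text's observation that oligopolists restrict output and raise prices relative to perfect competition, though less severely than a monopolist.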
Duopoly: A special case of an oligopoly, with only two firms. Game theory can elucidate behavior in duopolies and oligopolies. Monopsony A monopsony is a market where there is only one buyer and many sellers. Bilateral monopoly A bilateral monopoly is a market consisting of both a monopoly (a single seller) and a monopsony (a single buyer). Oligopsony An oligopsony is a market where there are a few buyers and many sellers. Game theory Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. The term "game" here implies the study of any strategic interaction between people. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers & acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems, and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy. Economics of information Information economics is a branch of microeconomic theory that studies how information and information systems affect an economy and economic decisions. Information has special characteristics. It is easy to create but hard to trust. It is easy to spread but hard to control. It influences many decisions. These special characteristics (as compared with other types of goods) complicate many standard economic theories. The economics of information has recently become of great interest to many, possibly due to the rise of information-based companies inside the technology industry. From a game theory approach, we can loosen the usual constraint that agents have complete information. Opportunity cost Opportunity cost is closely related to the idea of time constraints.
One can do only one thing at a time, which means that, inevitably, one is always giving up other things. The opportunity cost of any activity is the value of the next-best alternative thing one may have done instead. Opportunity cost depends only on the value of the next-best alternative; it doesn't matter whether one has five alternatives or 5,000. Opportunity costs can tell when not to do something as well as when to do something. For example, one may like waffles, but like chocolate even more. If someone offers only waffles, one would take them. But if offered waffles or chocolate, one would take the chocolate. The opportunity cost of eating waffles is sacrificing the chance to eat chocolate. Because the cost of not eating the chocolate is higher than the benefit of eating the waffles, it makes no sense to choose waffles. Of course, if one chooses chocolate, one is still faced with the opportunity cost of giving up having waffles. But one is willing to do that because the waffles' opportunity cost is lower than the benefit of the chocolate. Opportunity costs are unavoidable constraints on behaviour because one has to decide what's best and give up the next-best alternative. Price theory Price theory is a field of economics that uses the supply and demand framework to explain and predict human behavior. It is associated with the Chicago School of Economics. Price theory studies competitive equilibrium in markets to yield testable hypotheses that can be rejected. Price theory is not the same as microeconomics. Strategic behavior, such as the interactions among sellers in a market where they are few, is a significant part of microeconomics but is not emphasized in price theory. Price theorists focus on competition, believing it to be a reasonable description of most markets that leaves room to study additional aspects of tastes and technology. As a result, price theory tends to use less game theory than microeconomics does.
Price theory focuses on how agents respond to prices, but its framework can be applied to a wide variety of socioeconomic issues that might not seem to involve prices at first glance. Price theorists have influenced several other fields including developing public choice theory and law and economics. Price theory has been applied to issues previously thought of as outside the purview of economics such as criminal justice, marriage, and addiction. Microeconomic models Supply and demand Supply and demand is an economic model of price determination in a perfectly competitive market. It concludes that in a perfectly competitive market with no externalities, per unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium. Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power. For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). 
Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred. The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply. Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged. That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. 
Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors of inputs of production are all taken to be constant for a specific time period of evaluation of supply. Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply. For a given quantity of a consumer good, the point on the demand curve indicates the value, or marginal utility, to consumers for that unit. It measures what the consumer would be prepared to pay for that unit. The corresponding point on the supply curve measures marginal cost, the increase in total cost to the supplier for the corresponding unit of the good. The price in equilibrium is determined by supply and demand. In a perfectly competitive market, supply and demand equate marginal cost and marginal utility at equilibrium. On the supply side of the market, some factors of production are described as (relatively) variable in the short run, which affects the cost of changing output levels. Their usage rates can be changed easily, such as electrical power, raw-material inputs, and overtime and temp work. Other inputs are relatively fixed, such as plant and equipment and key personnel. In the long run, all inputs may be adjusted by management.
These distinctions translate to differences in the elasticity (responsiveness) of the supply curve in the short and long runs and corresponding differences in the price-quantity change from a shift on the supply or demand side of the market. Marginalist theory, such as above, describes the consumers as attempting to reach most-preferred positions, subject to income and wealth constraints while producers attempt to maximize profits subject to their own constraints, including demand for goods produced, technology, and the price of inputs. For the consumer, that point comes where marginal utility of a good, net of price, reaches zero, leaving no net gain from further consumption increases. Analogously, the producer compares marginal revenue (identical to price for the perfect competitor) against the marginal cost of a good, with marginal profit the difference. At the point where marginal profit reaches zero, further increases in production of the good stop. For movement to market equilibrium and for changes in equilibrium, price and quantity also change "at the margin": more-or-less of something, rather than necessarily all-or-nothing. Other applications of demand and supply include the distribution of income among the factors of production, including labour and capital, through factor markets. In a competitive labour market for example the quantity of labour employed and the price of labour (the wage rate) depends on the demand for labour (from employers for production) and supply of labour (from potential workers). Labour economics examines the interaction of workers and employers through such markets to explain patterns and changes of wages and other labour income, labour mobility, and (un)employment, productivity through human capital, and related public-policy issues. Demand-and-supply analysis is used to explain the behaviour of perfectly competitive markets, but as a standard of comparison it can be extended to any type of market. 
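The market-clearing story above can be sketched numerically. The linear demand and supply curves below are hypothetical (the coefficients are invented purely for illustration), but they show how the equilibrium price emerges and why a shortage or surplus appears away from it:

```python
# Hypothetical linear demand and supply curves (illustrative numbers only):
#   demand:  Qd = 100 - 2P
#   supply:  Qs = 10 + 4P
# Equilibrium is the price at which Qd == Qs.

def quantity_demanded(price: float) -> float:
    return 100 - 2 * price

def quantity_supplied(price: float) -> float:
    return 10 + 4 * price

# Solve 100 - 2P = 10 + 4P  =>  6P = 90  =>  P* = 15, Q* = 70
equilibrium_price = 90 / 6
equilibrium_quantity = quantity_demanded(equilibrium_price)

print(equilibrium_price)     # 15.0
print(equilibrium_quantity)  # 70.0

# Below equilibrium there is a shortage (bidding the price up);
# above it, a surplus (pushing the price down).
for price in (10, 15, 20):
    gap = quantity_demanded(price) - quantity_supplied(price)
    label = "shortage" if gap > 0 else "surplus" if gap < 0 else "equilibrium"
    print(price, gap, label)
```

The same demand-and-supply logic carries over to factor markets such as labour, with the wage rate playing the role of the price.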
It can also be generalized to explain variables across the economy, for example, total output (estimated as real GDP) and the general price level, as studied in macroeconomics. Tracing the qualitative and quantitative effects of variables that change supply and demand, whether in the short or long run, is a standard exercise in applied economics. Economic theory may also specify conditions such that supply and demand through the market is an efficient mechanism for allocating resources. Market structure Market structure refers to features of a market, including the number of firms in the market, the distribution of market shares between them, product uniformity across firms, how easy it is for firms to enter and exit the market, and forms of competition in the market. A market structure can have several types of interacting market systems. Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to substitute or replace markets with varying degrees of government-directed economic planning. Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Regulations help to mitigate negative externalities of goods and services when the private equilibrium of the market does not match the social equilibrium. One example of this is building codes: if they were absent in a purely competition-regulated market system, several horrific injuries or deaths might be required before companies began improving structural safety, as consumers may at first not be concerned or aware enough of safety issues to pressure companies to provide them, and companies would be motivated not to provide proper safety features because doing so would cut into their profits. The concept of "market type" is different from the concept of "market structure".
Nevertheless, it is worth noting here that there are a variety of types of markets. The different market structures produce cost curves based on the type of structure present. The different curves are developed based on the costs of production; specifically, the graph contains marginal cost, average total cost, average variable cost, average fixed cost, and marginal revenue, which is sometimes equal to the demand, average revenue, and price in a price-taking firm. Perfect competition Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit. Firms in perfect competition are "price takers" (they do not have enough market power to profitably increase the price of their goods or services). A good example would be that of digital marketplaces, such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfectly competitive market have perfect knowledge about the products that are being sold in this market. Imperfect competition Imperfect competition is a type of market structure showing some but not all features of competitive markets. Monopolistic competition Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what may be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities. Monopoly A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service.
Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are a bad thing, especially in industries where multiple firms would result in more costs than benefits (i.e. natural monopolies).
General Theory of Employment, Interest and Money written by John Maynard Keynes. When the Great Depression struck, classical economists had difficulty in explaining how goods could go unsold and workers could be left unemployed. In classical theory, prices and wages would drop until the market cleared, and all goods and labor were sold. Keynes offered a new theory of economics that explained why markets might not clear, which would evolve (later in the 20th century) into a group of macroeconomic schools of thought known as Keynesian economics – also called Keynesianism or Keynesian theory. In Keynes' theory, the quantity theory broke down because people and businesses tend to hold on to their cash in tough economic times – a phenomenon he described in terms of liquidity preferences. Keynes also explained how the multiplier effect would magnify a small decrease in consumption or investment and cause declines throughout the economy. Keynes also noted the role uncertainty and animal spirits can play in the economy. The generation following Keynes combined the macroeconomics of the General Theory with neoclassical microeconomics to create the neoclassical synthesis. By the 1950s, most economists had accepted the synthesis view of the macroeconomy. Economists like Paul Samuelson, Franco Modigliani, James Tobin, and Robert Solow developed formal Keynesian models and contributed formal theories of consumption, investment, and money demand that fleshed out the Keynesian framework. Monetarism Milton Friedman updated the quantity theory of money to include a role for money demand. He argued that the role of money in the economy was sufficient to explain the Great Depression, and that aggregate demand oriented explanations were not necessary. Friedman also argued that monetary policy was more effective than fiscal policy; however, Friedman doubted the government's ability to "fine-tune" the economy with monetary policy. 
He generally favored a policy of steady growth in money supply instead of frequent intervention. Friedman also challenged the Phillips curve relationship between inflation and unemployment. Friedman and Edmund Phelps (who was not a monetarist) proposed an "augmented" version of the Phillips curve that excluded the possibility of a stable, long-run tradeoff between inflation and unemployment. When the oil shocks of the 1970s created high unemployment and high inflation, Friedman and Phelps were vindicated. Monetarism was particularly influential in the early 1980s. Monetarism fell out of favor when central banks found it difficult to target money supply instead of interest rates as monetarists recommended. Monetarism also became politically unpopular when the central banks created recessions in order to slow inflation. New classical New classical macroeconomics further challenged the Keynesian school. A central development in new classical thought came when Robert Lucas introduced rational expectations to macroeconomics. Prior to Lucas, economists had generally used adaptive expectations, where agents were assumed to look at the recent past to make expectations about the future. Under rational expectations, agents are assumed to be more sophisticated. A consumer will not simply assume a 2% inflation rate just because that has been the average the past few years; they will look at current monetary policy and economic conditions to make an informed forecast. When new classical economists introduced rational expectations into their models, they showed that monetary policy could only have a limited impact. Lucas also made an influential critique of Keynesian empirical models. He argued that forecasting models based on empirical relationships would keep producing the same predictions even as the underlying model generating the data changed. He advocated models based on fundamental economic theory that would, in principle, be structurally accurate as economies changed.
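The contrast between adaptive and rational expectations described above can be illustrated with a small sketch. The adjustment rule and all numbers below are hypothetical; the point is only that an adaptive forecaster revises by a fraction of the last forecast error, and so lags behind a regime change that a rational agent, watching the policy itself, could anticipate at once:

```python
# Adaptive expectations: agents revise their inflation forecast by a fixed
# fraction of last period's forecast error. Parameters are illustrative.

def adaptive_forecasts(actual_inflation, initial_forecast=0.02, adjustment=0.5):
    forecasts = [initial_forecast]
    for actual in actual_inflation[:-1]:
        prev = forecasts[-1]
        forecasts.append(prev + adjustment * (actual - prev))
    return forecasts

# Inflation jumps from 2% to 6%; the adaptive forecast only catches up
# gradually, underpredicting for several periods.
actual = [0.02, 0.06, 0.06, 0.06, 0.06]
print([round(f, 4) for f in adaptive_forecasts(actual)])
# -> [0.02, 0.02, 0.04, 0.05, 0.055]
```

The persistent forecast errors after the jump are exactly the kind of systematic mistake that rational-expectations models rule out.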
Following Lucas's critique, new classical economists, led by Edward C. Prescott and Finn E. Kydland, created real business cycle (RBC) models of the macroeconomy. RBC models were created by combining fundamental equations from neoclassical microeconomics. In order to generate macroeconomic fluctuations, RBC models explained recessions and unemployment with changes in technology instead of changes in the markets for goods or money. Critics of RBC models argue that money clearly plays an important role in the economy, and the idea that technological regress can explain recent recessions is implausible. However, technological shocks are only the more prominent of a myriad of possible shocks to the system that can be modeled. Despite questions about the theory behind RBC models, they have clearly been influential in economic methodology. New Keynesian response New Keynesian economists responded to the new classical school by adopting rational expectations and focusing on developing micro-founded models that are immune to the Lucas critique. Stanley Fischer and John B. Taylor produced early work in this area by showing that monetary policy could be effective even in models with rational expectations when contracts locked in wages for workers. Other new Keynesian economists, including Olivier Blanchard, Julio Rotemberg, Greg Mankiw, David Romer, and Michael Woodford, expanded on this work and demonstrated other cases where inflexible prices and wages led to monetary and fiscal policy having real effects. Like classical models, new classical models had assumed that prices would be able to adjust perfectly and monetary policy would only lead to price changes. New Keynesian models investigated sources of sticky prices and wages due to imperfect competition, which would not adjust, allowing monetary policy to impact quantities instead of prices. By the late 1990s, economists had reached a rough consensus.
The nominal rigidity of new Keynesian theory was combined with rational expectations and the RBC methodology to produce dynamic stochastic general equilibrium (DSGE) models. The fusion of elements from different schools of thought has been dubbed the new neoclassical synthesis. These models are now used by many central banks and are a core part of contemporary macroeconomics. New Keynesian economics, which developed partly in response to new classical economics, strives to provide microeconomic foundations to Keynesian economics by showing how imperfect markets can justify demand management. Macroeconomic models Aggregate demand–aggregate supply The AD-AS model has become the standard textbook model for explaining the macroeconomy. This model shows the price level and level of real output given the equilibrium in aggregate demand and aggregate supply. The aggregate demand curve's downward slope means that more output is demanded at lower price levels. The downward slope is the result of three effects: the Pigou or real balance effect, which states that as real prices fall, real wealth increases, resulting in higher consumer demand of goods; the Keynes or interest rate effect, which states that as prices fall, the demand for money decreases, causing interest rates to decline and borrowing for investment and consumption to increase; and the net export effect, which states that as prices rise, domestic goods become comparatively more expensive to foreign consumers, leading to a decline in exports. In the conventional Keynesian use of the AS-AD model, the aggregate supply curve is horizontal at low levels of output and becomes inelastic near the point of potential output, which corresponds with full employment. Since the economy cannot produce beyond the potential output, any AD expansion will lead to higher price levels instead of higher output. The AD–AS diagram can model a variety of macroeconomic phenomena, including inflation. 
Changes in the non-price level factors or determinants cause changes in aggregate demand and shifts of the entire aggregate demand (AD) curve. When demand for goods exceeds supply, there is an inflationary gap where demand-pull inflation occurs and the AD curve shifts upward to a higher price level. When the economy faces higher costs, cost-push inflation occurs and the AS curve shifts upward to higher price levels.
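The shift logic above can be sketched with a pair of hypothetical linear AD and AS curves (the coefficients are invented purely for illustration); a rightward AD shift raises the equilibrium price level, the demand-pull case:

```python
# Minimal linear AD-AS sketch (hypothetical coefficients):
#   AD: Y = a - b*P   (higher price level, less output demanded)
#   AS: Y = c + d*P   (upward-sloping short-run aggregate supply)

def ad_as_equilibrium(a, b, c, d):
    price_level = (a - c) / (b + d)   # set AD equal to AS and solve for P
    output = a - b * price_level
    return price_level, output

p0, y0 = ad_as_equilibrium(a=500, b=2, c=100, d=2)   # baseline
p1, y1 = ad_as_equilibrium(a=540, b=2, c=100, d=2)   # AD shifts right

print(p0, y0)  # 100.0 300.0
print(p1, y1)  # 110.0 320.0 -> higher price level: demand-pull inflation
```

A cost-push shock would instead shift the AS relation (lower c here), raising the price level while reducing output.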
The AS–AD diagram is also widely used as an instructive tool to model the effects of various macroeconomic policies. IS-LM The IS–LM model gives the underpinnings of aggregate demand (itself discussed above). It answers the question "At any given price level, what is the quantity of goods demanded?". This model shows what combination of interest rates and output will ensure equilibrium in both the goods and money markets. The goods market is modeled as giving equality between investment and public and private saving (IS), and the money market is modeled as giving equilibrium between the money supply and liquidity preference. The IS curve consists of the points (combinations of income and interest rate) where investment, given the interest rate, is equal to public and private saving, given output. The IS curve is downward sloping because output and the interest rate have an inverse relationship in the goods market: as output increases, more income is saved, which means interest rates must be lower to spur enough investment to match saving. The LM curve is upward sloping because the interest rate and output have a positive relationship in the money market: as income (identically equal to output) increases, the demand for money increases, resulting in a rise in the interest rate in order to just offset the incipient rise in money demand. The IS-LM model is often used to demonstrate the effects of monetary and fiscal policy. Textbooks frequently use the IS-LM model, but it does not feature the complexities of most modern macroeconomic models. Nevertheless, these models still feature similar relationships to those in IS-LM. Growth models The neoclassical growth model of Robert Solow has become a common textbook model for explaining economic growth in the long-run. The model begins with a production function where national output is the product of two inputs: capital and labor.
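The simultaneous goods-market and money-market equilibrium in the IS-LM discussion above can be sketched as two linear equations; the functional forms and parameter values below are hypothetical, chosen only to illustrate how the IS and LM relations jointly pin down output and the interest rate:

```python
# IS:  Y = A - h*r   (goods-market equilibrium; A is autonomous demand)
# LM:  m = k*Y - u*r (money-market equilibrium; m is the real money supply)
# Solving the two equations simultaneously gives the equilibrium output Y
# and interest rate r. All parameter values below are hypothetical.

def is_lm_equilibrium(A, h, k, u, m):
    output = (u * A + h * m) / (u + h * k)   # substitute LM's r into IS
    interest_rate = (k * output - m) / u     # back out r from LM
    return output, interest_rate

Y, r = is_lm_equilibrium(A=1000, h=50, k=0.5, u=25, m=400)
print(Y, r)  # 900.0 2.0

# Expansionary monetary policy (a higher real money supply m) shifts LM
# right: output rises and the interest rate falls.
Y2, r2 = is_lm_equilibrium(A=1000, h=50, k=0.5, u=25, m=450)
print(Y2 > Y, r2 < r)  # True True
```

A fiscal expansion would instead raise A, shifting IS right and raising both output and the interest rate in this sketch.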
The Solow model assumes that labor and capital are used at constant rates without the fluctuations in unemployment and capital utilization commonly seen in business cycles. An increase in output, or economic growth, can only occur because of an increase in the capital stock, a larger population, or technological advancements that lead to higher productivity (total factor productivity). An increase in the savings rate leads to a temporary increase as the economy creates more capital, which adds to output. However, eventually the depreciation rate will limit the expansion of capital: savings will be used up replacing depreciated capital, and no savings will remain to pay for an additional expansion in capital. Solow's model suggests that economic growth in terms of output per capita depends solely on technological advances that enhance productivity. In the 1980s and 1990s endogenous growth theory arose to challenge neoclassical growth theory. This group of models explains economic growth through other factors, such as increasing returns to scale for capital and learning-by-doing, that are endogenously determined instead of the exogenous technological improvement used to explain growth in Solow's model. Humanity's economic system as a subsystem of the global environment In the macroeconomic models in ecological economics, the economic system is a subsystem of the environment. In this model, the circular flow of income diagram is replaced in ecological economics by a more complex flow diagram reflecting the input of solar energy, which sustains natural inputs and environmental services which are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste. The potential of an environment to provide services and materials is referred to as an "environment's source function", and this function is depleted as resources are consumed or pollution contaminates the resources. 
The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs. Some persistent pollutants, such as some organic pollutants and nuclear waste, are absorbed very slowly or not at all; ecological economists emphasize minimizing "cumulative pollutants". Pollutants affect human health and the health of the ecosystem. Basic macroeconomic concepts Macroeconomics encompasses a variety of concepts and variables, but there are three central topics for macroeconomic research. Macroeconomic theories usually relate the phenomena of output, unemployment, and inflation. Outside of macroeconomic theory, these topics are also important to all economic agents including workers, consumers, and producers. Output and income National output is the total amount of everything a country produces in a given period of time. Everything that is produced and sold generates an equal amount of income. The total output of the economy is commonly measured as GDP per person. Output and income are usually considered equivalent, and the two terms are often used interchangeably, since output changes into income. Output can be measured as total income, or it can be viewed from the production side and measured as the total value of final goods and services or the sum of all value added in the economy. Macroeconomic output is usually measured by gross domestic product (GDP) or one of the other national accounts. Economists interested in long-run increases in output study economic growth. Advances in technology, accumulation of machinery and other capital, and better education and human capital are all factors that lead to increased economic output over time. However, output does not always increase consistently over time. Business cycles can cause short-term drops in output called recessions.
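The equivalence noted above between the total value of final goods and the sum of all value added can be checked with a toy production chain (the stages and figures below are invented purely for illustration):

```python
# Two equivalent ways to measure output, using a hypothetical bread chain:
# each stage's value added is its sales minus the intermediate inputs it buys.

stages = [
    {"name": "farmer", "sales": 1.00, "inputs": 0.00},  # grows wheat
    {"name": "miller", "sales": 2.50, "inputs": 1.00},  # buys wheat, sells flour
    {"name": "baker",  "sales": 6.00, "inputs": 2.50},  # buys flour, sells bread
]

total_value_added = sum(s["sales"] - s["inputs"] for s in stages)
final_goods_value = stages[-1]["sales"]  # only the final good counts

print(total_value_added)  # 6.0
print(final_goods_value)  # 6.0 -- the two measures agree
```

Counting every stage's sales instead would double-count the intermediate wheat and flour, which is why national accounts use value added or final goods only.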
Economists look for macroeconomic policies that prevent economies from slipping into recessions, and that lead to faster long-term growth. Unemployment The amount of unemployment in an economy is measured by the unemployment rate, i.e. the percentage of workers without jobs in the labor force. The unemployment rate in the labor force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded. Unemployment can be generally broken down into several types that are related to different causes. Classical unemployment theory suggests that unemployment occurs when wages are too high for employers to be willing to hire more workers. Other more modern economic theories suggest that increased wages actually |
epitomize the virtuous but fiery girl next door. Even at private parties, people instinctively stood up when Pickford entered a room; she and her husband were often referred to as "Hollywood royalty". Their international reputations were broad. Foreign heads of state and dignitaries who visited the White House often asked if they could also visit Pickfair, the couple's mansion in Beverly Hills. Dinners at Pickfair became celebrity events. Charlie Chaplin, Fairbanks' best friend, was often present. Other guests included George Bernard Shaw, Albert Einstein, Elinor Glyn, Helen Keller, H. G. Wells, Lord Mountbatten, Fritz Kreisler, Amelia Earhart, F. Scott Fitzgerald, Noël Coward, Max Reinhardt, Baron Nishi, Vladimir Nemirovich-Danchenko, Sir Arthur Conan Doyle, Austen Chamberlain, Sir Harry Lauder, and Meher Baba, among others. However, the public nature of Pickford's second marriage strained it to the breaking point. Both she and Fairbanks had little time off from producing and acting in their films. They were also constantly on display as America's unofficial ambassadors to the world, leading parades, cutting ribbons, and making speeches. When their film careers both began to flounder at the end of the silent era, Fairbanks' restless nature prompted him to overseas travel (something which Pickford did not enjoy). When Fairbanks' romance with Sylvia, Lady Ashley became public in the early 1930s, he and Pickford separated. They divorced January 10, 1936. Fairbanks' son by his first wife, Douglas Fairbanks Jr., claimed his father and Pickford long regretted their inability to reconcile. On June 24, 1937, Pickford married her third and last husband, actor and band leader Charles "Buddy" Rogers. They adopted two children: Roxanne (born 1944, adopted 1944) and Ronald Charles (born 1937, adopted 1943, a.k.a. Ronnie Pickford Rogers). A PBS American Experience documentary described Pickford's relationship with her children as tense. 
She criticized their physical imperfections, including Ronnie's small stature and Roxanne's crooked teeth. Both children later said their mother was too self-absorbed to provide real maternal love. In 2003, Ronnie recalled that "Things didn't work out that much, you know. But I'll never forget her. I think that she was a good woman." Political views Pickford supported Thomas Dewey in the 1944 United States presidential election, Barry Goldwater in the 1964 United States presidential election, and Ronald Reagan in his race for governor in 1966. Later years and death After retiring from the screen, Pickford became an alcoholic, as her father had been. Her mother Charlotte died of breast cancer in March 1928. Her siblings, Lottie and Jack, both died of alcohol-related causes in 1936 and 1933, respectively. These deaths, her divorce from Fairbanks, and the end of silent films left Pickford deeply depressed. Her relationship with her adopted children, Roxanne and Ronald, was turbulent at best. Pickford withdrew and gradually became a recluse, remaining almost entirely at Pickfair and allowing visits only from Lillian Gish, her stepson Douglas Fairbanks Jr., and a few select others. In 1955, she published her memoirs, Sunshine and Shadow. She had previously published Why Not Try God (1934), an essay on spirituality and personal growth; My Rendezvous with Life (1935), an essay on death and her belief in an afterlife; and a novel, The Demi-Widow (1935). She appeared in court in 1959, in a matter pertaining to her co-ownership of North Carolina TV station WSJS-TV. The court date coincided with the date of her 67th birthday; under oath, when asked to give her age, Pickford replied: "I'm 21, going on 20." In the mid-1960s, Pickford often received visitors only by telephone, speaking to them from her bedroom.
Charles "Buddy" Rogers often gave guests tours of Pickfair, including views of a genuine western bar Pickford had bought for Douglas Fairbanks, and a portrait of Pickford in the drawing room. A print of this image now hangs in the Library of Congress. When Pickford received an Academy Honorary Award in 1976, the Academy sent a TV crew to her house to record her short statement of thanks – offering the public a very rare glimpse into Pickfair Manor. Charitable events continued to be held at Pickfair, including an annual Christmas party for blind war veterans, mostly from World War I. Pickford believed that she had ceased to be a British subject upon her marriage to Fairbanks, an American citizen, in 1920. Thus, she never acquired Canadian citizenship when it was first created in 1947. However, Pickford held and traveled under a British/Canadian passport, which she renewed regularly at the British/Canadian consulates in Los Angeles, and she did not take out papers for American citizenship. She also owned a house in Toronto, Ontario, Canada. Toward the end of her life, Pickford made arrangements with the Canadian Department of Citizenship to officially acquire Canadian citizenship because she wished to "die as a Canadian". Canadian authorities were not sure that she had ever lost her Canadian citizenship, given her passport status, but her request was approved and she officially became a Canadian citizen. On May 29, 1979, Pickford died at a Santa Monica, California, hospital of complications from a cerebral hemorrhage she had suffered the week before. She was interred in the Garden of Memory of the Forest Lawn Memorial Park cemetery in Glendale, California. Legacy Pickford was awarded a star in the category of motion pictures on the Hollywood Walk of Fame at 6280 Hollywood Blvd. Her handprints and footprints are displayed at Grauman's Chinese Theatre in Hollywood, California. She is represented in Hergé's Tintin in America.
The Pickford Center for Motion Picture Study at 1313 Vine Street in Hollywood, constructed by the Academy of Motion Picture Arts and Sciences, opened in 1948 as a radio and television studio facility. The Mary Pickford Theater at the James Madison Memorial Building of the Library of Congress is named in her honor. The Mary Pickford Auditorium at Claremont McKenna College is named in her honor. In 1948, Mary Pickford built a seven-bedroom, eight-bathroom estate at the B Bar H Ranch, California, where she lived for a time before later selling it. A first-run movie theatre in Cathedral City, California, The Mary Pickford Theatre, opened on May 25, 2001. The theater has several screens and is built in the shape of a Spanish cathedral, complete with bell tower and three-story lobby. The lobby contains a historic display with original artifacts belonging to Pickford and Buddy Rogers, her last husband. Among them are a beaded gown designed by Mitchell Leisen that she wore in the film Dorothy Vernon of Haddon Hall (1924), her special Oscar, and a jewelry box. The 1980 stage musical The Biograph Girl, about the silent film era, features the character of Pickford. In 2007, the Academy of Motion Picture Arts and Sciences sued the estate of the deceased Buddy Rogers' second wife, Beverly Rogers, in order to stop the public sale of one of Pickford's Oscars. A bust and historical plaque mark her birthplace in Toronto, now the site of the Hospital for Sick Children. The plaque was unveiled by her husband Buddy Rogers in 1973. The bust by artist Eino Gira was added ten years later. Her date of birth is stated on the plaque as April 8, 1893.
This is presumably because her date of birth was never registered; throughout her life, beginning as a child, she led many people to believe that she was a year younger than her real age, so that she appeared to be more of an acting prodigy and continued to be cast in younger roles, which were more plentiful in the theatre. The family home had been demolished in 1943, and many of the bricks were delivered to Pickford in California. Proceeds from the sale of the property were donated by Pickford to build a bungalow in East York, Ontario, which was then a Toronto suburb. The bungalow was the first prize in a lottery in Toronto to benefit war charities, and Pickford unveiled the home on May 26, 1943. In 1993, a Golden Palm Star on the Palm Springs Walk of Stars was dedicated to her. Pickford received a posthumous star on Canada's Walk of Fame in Toronto in 1999. Pickford was featured on a Canadian postage stamp in 2006. From January 2011 until July 2011, the Toronto International Film Festival exhibited a collection of Mary Pickford memorabilia in the Canadian Film Gallery of the TIFF Bell LightBox building. In February 2011, the Spadina Museum, dedicated to the 1920s and 1930s era in Toronto, staged performances of Sweetheart: The Mary Pickford Story, a one-woman musical based on the life and career of Pickford. In 2013, a copy of an early Pickford film that was thought to be lost (Their First Misunderstanding) was found by Peter Massie, a carpenter tearing down an abandoned barn in New Hampshire. It was donated to Keene State College and is currently undergoing restoration by the Library of Congress for exhibition. The film is notable as the first in which Pickford was credited by name. On August 29, 2014, while presenting Behind The Scenes (1914) at Cinecon, film historian Jeffrey Vance announced that he was working with the Mary Pickford Foundation on her official biography.
The Google Doodle of April 8, 2017 commemorated Mary Pickford's 125th birthday. The Girls in the Picture, a 2018 novel by Melanie Benjamin, is historical fiction about the friendship of Mary Pickford and screenwriter Frances Marion. On August 20, 2019, the Toronto International Film Festival announced Mati Diop as the recipient of the first Mary Pickford Award. Pickford later introduced Florence La Badie to D. W. Griffith, who launched La Badie's career. In January 1910, Pickford traveled with a Biograph crew to Los Angeles. Many other film companies wintered on the West Coast, escaping the weak light and short days that hampered winter shooting in the East. Pickford added to her 1909 Biographs (Sweet and Twenty, They Would Elope, and To Save Her Soul, to name a few) with films made in California. Actors were not listed in the credits in Griffith's company. Audiences noticed and identified Pickford within weeks of her first film appearance. Exhibitors, in turn, capitalized on her popularity by advertising on sandwich boards that a film featuring "The Girl with the Golden Curls", "Blondilocks", or "The Biograph Girl" was inside. Pickford left Biograph in December 1910. The following year, she starred in films at Carl Laemmle's Independent Moving Pictures Company (IMP). IMP was absorbed into Universal Pictures in 1912, along with Majestic. Unhappy with their creative standards, Pickford returned to work with Griffith in 1912. Some of her best performances were in his films, such as Friends, The Mender of Nets, Just Like a Woman, and The Female of the Species.
That year, Pickford also introduced Dorothy and Lillian Gish – whom she had befriended as new neighbors from Ohio – to Griffith, and each became a major silent film star, in comedy and tragedy respectively. Pickford made her last Biograph picture, The New York Hat, in late 1912. She returned to Broadway in the David Belasco production of A Good Little Devil (1912). This was a major turning point in her career. Pickford, who had always hoped to conquer the Broadway stage, discovered how deeply she missed film acting. In 1913, she decided to work exclusively in film. The previous year, Adolph Zukor had formed Famous Players in Famous Plays. It was later known as Famous Players-Lasky and then Paramount Pictures, one of the first American feature film companies. Pickford left the stage to join Zukor's roster of stars. Zukor believed film's potential lay in recording theatrical players in replicas of their most famous stage roles and productions. Zukor first filmed Pickford in a silent version of A Good Little Devil. The film, produced in 1913, showed the play's Broadway actors reciting every line of dialogue, resulting in a stiff film that Pickford later called "one of the worst [features] I ever made ... it was deadly". Zukor agreed; he held the film back from distribution for a year. By that time, Pickford's work in material written for the camera had attracted a strong following. Comedy-dramas, such as In the Bishop's Carriage (1913), Caprice (1913), and especially Hearts Adrift (1914), made her irresistible to moviegoers. Hearts Adrift was so popular that Pickford asked for the first of her many publicized pay raises based on the profits and reviews. The film marked the first time Pickford's name was featured above the title on movie marquees. Tess of the Storm Country was released five weeks later. Biographer Kevin Brownlow observed that the film "sent her career into orbit and made her the most popular actress in America, if not the world".
Her appeal was summed up two years later by the February 1916 issue of Photoplay as "luminous tenderness in a steel band of gutter ferocity". Only Charlie Chaplin, who slightly surpassed Pickford's popularity in 1916, had a similarly spellbinding pull with critics and the audience. Each enjoyed a level of fame far exceeding that of other actors. Throughout the 1910s and 1920s, Pickford was believed to be the most famous woman in the world, or, as a silent-film journalist described her, "the best known woman who has ever lived, the woman who was known to more people and loved by more people than any other woman that has been in all history". Stardom Pickford starred in 52 features throughout her career. On June 24, 1916, Pickford signed a new contract with Zukor that granted her full authority over production of the films in which she starred, and a record-breaking salary of $10,000 a week. In addition, Pickford's compensation was half of a film's profits, with a guarantee of $1,040,000, making her the first actress to sign a million-dollar contract. She also became vice-president of Pickford Film Corporation. Occasionally, she played a child, in films such as The Poor Little Rich Girl (1917), Rebecca of Sunnybrook Farm (1917), Daddy-Long-Legs (1919) and Pollyanna (1920). Pickford's fans were devoted to these "little girl" roles, but they were not typical of her career. Due to her lack of a normal childhood, she enjoyed making these pictures. Given how small she was, at under five feet, and her naturalistic acting abilities, she was very successful in these roles. Douglas Fairbanks Jr., when he first met her in person as a boy, assumed she was a new playmate for him, and asked her to come and play trains with him, which she obligingly did. In August 1918, Pickford's contract expired and, when she refused Zukor's terms for a renewal, she was offered $250,000 to leave the motion picture business.
She declined, and went to First National Pictures, which agreed to her terms. In 1919, Pickford, along with D. W. Griffith, Charlie Chaplin, and Douglas Fairbanks, formed the independent film production company United Artists. Through United Artists, Pickford continued to produce and perform in her own movies; she could also distribute them as she chose. In 1920, Pickford's film Pollyanna grossed around $1,100,000. The following year, Pickford's film Little Lord Fauntleroy was also a success, and in 1923, Rosita grossed over $1,000,000 as well. During this period, she also made Little Annie Rooney (1925), another film in which Pickford played a child, Sparrows (1926), which blended the Dickensian with newly minted German expressionist style, and My Best Girl (1927), a romantic comedy featuring her future husband Charles "Buddy" Rogers. The arrival of sound was her undoing. Pickford underestimated the value of adding sound to movies, claiming that "adding sound to movies would be like putting lipstick on the Venus de Milo". She played a reckless socialite in Coquette (1929), her first talkie, a role for which her famous ringlets were cut into a 1920s' bob. Pickford had already cut her hair in the wake of her mother's death in 1928. Fans were shocked at the transformation. Pickford's hair had become a symbol of female virtue, and when she cut it, the act made front-page news in The New York Times and other papers. Coquette was a success and won her an Academy Award for Best Actress, although this was highly controversial. The public failed to respond to her in the more sophisticated roles. Like most movie stars of the silent era, Pickford found her career fading as talkies became more popular among audiences. Her next film, The Taming of The Shrew, made with husband Douglas Fairbanks, was not well received at the box office. Established Hollywood actors were panicked by the impending arrival of the talkies. 
On March 29, 1928, The Dodge Brothers Hour was broadcast from Pickford's bungalow, featuring Fairbanks, Chaplin, Norma Talmadge, Gloria Swanson, John Barrymore, D. W. Griffith, and Dolores del Río, among others. They spoke on the radio show to prove that they could meet the challenge of talking movies. A transition in the roles Pickford selected came when she was in her late 30s, no longer able to play the children, teenage spitfires, and feisty young women so adored by her fans, and not suited for the glamorous and vampish heroines of early sound. In 1933, she underwent a Technicolor screen test for an animated/live action film version of Alice in Wonderland, but Walt Disney discarded the project when Paramount released its own version of the book. Only one Technicolor still from her screen test survives. She retired from film acting in 1933, following three costly failures; her last film appearance was in Secrets. She appeared on stage in Chicago in 1934 in the play The Church Mouse and went on tour in 1935, starting in Seattle with the stage version of Coquette. She also appeared in a season of radio plays for NBC in 1935 and CBS in 1936. In 1936 she became vice-president of United Artists and continued to produce films for others, including One Rainy Afternoon (1936), The Gay Desperado (1936), Sleep, My Love (1948; with Claudette Colbert) and Love Happy (1949), with the Marx Brothers. The film industry Pickford used her stature in the movie industry to promote a variety of causes. Although her image depicted fragility and innocence, she proved to be a strong businesswoman who took control of her career in a cutthroat industry. During World War I she promoted the sale of Liberty Bonds, making an intensive series of fund-raising speeches, beginning in Washington, D.C., where she sold bonds alongside Charlie Chaplin, Douglas Fairbanks, Theda Bara, and Marie Dressler. Five days later she spoke on Wall Street to an estimated 50,000 people.
Though Canadian-born, she was a powerful symbol of American culture, kissing the American flag for cameras and auctioning one of her world-famous curls for $15,000. In a single speech in Chicago, she sold an estimated five million dollars' worth of bonds. She was christened the U.S. Navy's official "Little Sister"; the Army named two cannons after her and made her an honorary colonel. In 1916, Pickford and Constance Adams DeMille, wife of director Cecil B. DeMille, helped found the Hollywood Studio Club, a dormitory for young women involved in the motion picture business. At the end of World War I, Pickford conceived of the Motion Picture Relief Fund, an organization to help financially needy actors. Leftover funds from her work selling Liberty Bonds were put toward its creation, and in 1921, the Motion Picture Relief Fund (MPRF) was officially incorporated, with Joseph Schenck voted its first president and Pickford its vice president. In 1932, Pickford spearheaded the "Payroll Pledge Program", a payroll-deduction plan for studio workers who gave one half of one percent of their earnings to the MPRF. As a result, in 1940, the Fund was able to purchase land and build the Motion Picture Country House and Hospital, in Woodland Hills, California. An astute businesswoman, Pickford became her own producer within three years of her start in features. According to her Foundation, "she oversaw every aspect of the making of her films, from hiring talent and crew to overseeing the script, the shooting, the editing, to the final release and promotion of each project". She demanded (and received) these powers in 1916, when she was under contract to Zukor's Famous Players in Famous Plays (later Paramount). Zukor acquiesced to her refusal to participate in block-booking, the widespread practice of requiring an exhibitor, in order to show a Pickford film, also to book other films of the studio's choosing.
In 1916, Pickford's films were distributed, singly, through a special distribution unit called Artcraft. The Mary Pickford Corporation was briefly Pickford's motion-picture production company. In 1919, she increased her power by co-founding United Artists (UA) with Charlie Chaplin, D. W. Griffith, and her soon-to-be husband, Douglas Fairbanks. Before UA's creation, Hollywood studios were vertically integrated, not only producing films but forming chains of theaters. Distributors (also part of the studios) arranged for company productions to be shown in the company's movie venues. Filmmakers relied on the studios for bookings; in return they put up with what many considered creative interference. United Artists broke from this tradition. It was solely a distribution company, offering independent film producers access to its own screens as well as the rental of temporarily unbooked cinemas owned by other companies. Pickford and Fairbanks produced and shot their films after 1920 at the jointly owned Pickford-Fairbanks studio on Santa Monica Boulevard. The producers who signed with UA were true independents, producing, creating and controlling their work to an unprecedented degree. As a co-founder, as well as the producer and star of her own films, Pickford became the most powerful woman who has ever worked in Hollywood. By 1930, Pickford's acting career had largely faded. After retiring three years later, however, she continued to produce films for United Artists. She and Chaplin remained partners in the company for decades. Chaplin left the company in 1955, and Pickford followed suit in 1956, selling her remaining shares for $3 million. She had bought the rights to many of her early silent films with the intention of burning them on her death, but in 1970 she agreed to donate 50 of her Biograph films to the American Film Institute. In 1976, she received an Academy Honorary Award for her contribution to American film. Personal life Pickford was married three times. 
She married Owen Moore, an Irish-born silent film actor, on January 7, 1911. It is rumored that she became pregnant by Moore in the early 1910s and had a miscarriage or an abortion. Some accounts suggest this resulted in her later inability to have children. The couple's marriage was strained by Moore's alcoholism, insecurity about living in the shadow of Pickford's fame, and bouts of domestic violence. The couple lived together on-and-off for several years. Pickford became secretly involved in a relationship with Douglas Fairbanks. They toured the U.S. together in 1918 to promote Liberty Bond sales for the World War I effort. Around this time, Pickford also suffered from the flu during the 1918 flu pandemic. Pickford divorced Moore on March 2, 1920, after she agreed to his $100,000 demand for a settlement ($1.4 million in 2021, adjusted for inflation).
Sennett's parents married in 1879 in Tingwick, Quebec, and moved the same year to Richmond, Quebec, where Sinnott was hired as a laborer. By 1883, when Sennett's brother George was born, Sinnott was working as an innkeeper, a position he held for many years. Sennett's parents had all their children and raised their family in Richmond, then a small Eastern Townships village. At that time, Sennett's grandparents were living in Danville, Quebec. Sennett moved to Connecticut when he was 17 years old. He lived for a while in Northampton, Massachusetts, where, according to his autobiography, he first got the idea to become an opera singer after seeing a vaudeville show. He said that the most respected lawyer in town, Northampton mayor (and future President of the United States) Calvin Coolidge, as well as Sennett's mother, tried to talk him out of his musical ambitions. In New York City, he took on the stage name Mack Sennett and became an actor, singer, dancer, clown, set designer, and director for the Biograph Company. A distinction in his acting career, often overlooked, is that he played Sherlock Holmes 11 times, albeit as a parody, between 1911 and 1913. Keystone Studios With financial backing from Adam Kessel and Charles O. Bauman of the New York Motion Picture Company, Sennett founded Keystone Studios in Edendale, California – now a part of Echo Park – in 1912. The original main building, which was the first totally enclosed film stage and studio ever constructed, is still standing. Many successful actors began their film careers with Sennett, including Marie Dressler, Mabel Normand, Charles Chaplin, Harry Langdon, Roscoe Arbuckle, Harold Lloyd, Raymond Griffith, Gloria Swanson, Ford Sterling, Andy Clyde, Chester Conklin, Polly Moran, Louise Fazenda, The Keystone Cops, Bing Crosby, and W. C. Fields.
Sennett, dubbed the King of Hollywood's Fun Factory, produced slapstick comedies that were noted for their hair-raising car chases and custard pie warfare, especially in the Keystone Cops series. The comic formulas, however well executed, were based on humorous situations rather than the personal traits of the comedian. The various social types, often grotesquely portrayed by members of Sennett's troupe, were adequate to render the largely interchangeable routines: "Having a funny mustache, or crossed-eyes, or an extra two-hundred pounds was as much individualization as was required." Film historian Richard Koszarski qualifies the "fun factory" influence on comedic film acting. Sennett's first female comedian was Mabel Normand, who became a major star under his direction and with whom he embarked on a tumultuous romantic relationship. Sennett also developed the Kid Comedies, a forerunner of the Our Gang films, and in a short time, his name became synonymous with the screen comedies that were called "flickers" at the time. In 1915, Keystone Studios became an autonomous production unit of the ambitious Triangle Film Corporation, as Sennett joined forces with D. W. Griffith and Thomas Ince, both powerful figures in the film industry. Sennett Bathing Beauties Also beginning in 1915, Sennett assembled a bevy of women known as the Sennett Bathing Beauties to appear in provocative bathing costumes in comedy short subjects, in promotional material, and in promotional events such as Venice Beach beauty contests. The Sennett Bathing Beauties continued to appear through 1928. Independent production In 1917, Sennett gave up the Keystone trademark and organized his own company, Mack Sennett Comedies Corporation. Sennett's bosses retained the Keystone trademark and produced a cheap series of comedy shorts that were "Keystones" in name only: they were unsuccessful, and Sennett had no connection with them.
Sennett went on to produce more ambitious comedy short films and a few feature-length films. During the 1920s his short subjects were in much demand; they featured stars such as Louise Fazenda, Billy Bevan, Andy Clyde, Harry Gribbon, Vernon Dent, and Alice Day. Sennett's studio did not survive the Great Depression. His partnership with Paramount lasted only one year, and he was forced into bankruptcy in November 1933. On January 12, 1934, Sennett was injured in an automobile accident in Mesa, Arizona, that killed blackface performer Charles Mack. His last work, in 1935, was as a producer-director for Educational Pictures, for which he directed Buster Keaton in The Timid Young Man and Joan Davis in Way Up Thar. (The 1935 Vitaphone short subject Keystone Hotel is not a Sennett production; although it featured several alumni of the Mack Sennett Studios, Sennett was not involved in making it.) Mack Sennett went into semiretirement at the age of 55, having produced more than 1,000 silent films and several dozen talkies during a 25-year career. His studio property was purchased by Mascot Pictures (later part of Republic Pictures), and many of his former staffers found work at Columbia Pictures. In March 1938, Sennett was presented with an honorary Academy Award: "for his lasting contribution to the comedy technique of the screen, the basic principles of which are as important today as when they were first put into practice, the Academy presents a Special Award to that master of fun, discoverer of stars, sympathetic, kindly, understanding comedy genius – Mack Sennett." Later projects Rumors abounded that Sennett would be returning to film production (a 1938 publicity release indicated that he would be working with Stan Laurel of Laurel and Hardy), but apart from Sennett reissuing a couple of his Bing Crosby two-reelers to theaters, nothing happened.
Sennett did appear in front of the camera, however, in Hollywood Cavalcade (1939), itself a thinly disguised version of the Mack Sennett-Mabel Normand romance. In 1949, he provided film footage for, and appeared in, Down Memory Lane, the first full-length comedy compilation film, which was written and narrated by Steve Allen. Sennett was profiled in the television series This is Your Life in 1954, and made a cameo appearance (for $1,000) in Abbott and Costello Meet the Keystone Kops (1955). His last notable contribution was to the NBC radio program Biography in Sound, relating memories of working with W. C. Fields, broadcast February 28, 1956. Death Sennett died on November 5, 1960, in Woodland Hills, California, aged 80. He was interred in the Holy Cross Cemetery in Culver City, California. Filmography Tributes For his contribution to the motion picture industry, Sennett was honored with a star on the Hollywood Walk of Fame at 6712 Hollywood Boulevard. He was also inducted into Canada's Walk of Fame in 2014. In popular culture In A Story of Water, a 1961 short film by Jean-Luc Godard and François Truffaut, the directors dedicated the film to Mack Sennett. In 1974, Michael Stewart and Jerry Herman wrote the musical Mack & Mabel, chronicling the romance between Sennett and Mabel Normand. Sennett also was a leading character in The Biograph Girl, a 1980 musical about the silent film era. Peter Lovesey's 1983 novel Keystone is a whodunnit set in the Keystone Studios and involving (among others) Mack Sennett, Mabel Normand, Roscoe Arbuckle, and the Keystone Cops. Dan Aykroyd portrayed Mack Sennett in the 1992 movie Chaplin. Marisa Tomei played Mabel Normand and Robert Downey Jr. starred as Charlie Chaplin. Joseph Beattie and Andrea Deck portrayed Mack Sennett and Mabel Normand, respectively, in episode
However, the MPPC also established a monopoly on all aspects of filmmaking. Eastman Kodak, which owned the patent on raw film stock, was a member of the trust and thus agreed to sell stock only to other members. Likewise, the trust's control of patents on motion picture cameras ensured that only MPPC studios were able to film, and the projector patents allowed the trust to make licensing agreements with distributors and theaters – and thus determine who screened their films and where. The patents owned by the MPPC allowed them to use federal law enforcement officials to enforce their licensing agreements and to prevent unauthorized use of their cameras, films, projectors, and other equipment. In some cases, however, the MPPC made use of hired thugs and mob connections to violently disrupt productions that were not licensed by the trust. Content The MPPC also strictly regulated the production content of their films, primarily as a means of cost control. Films were initially limited to one reel in length (13–17 minutes), although competition by independent and foreign producers by 1912 led to the introduction of two-reelers, and by 1913, three- and four-reelers. Backlash and decline Many independent filmmakers, who controlled from one-quarter to one-third of the domestic marketplace, responded to the creation of the MPPC by moving their operations to Hollywood, whose distance from Edison's home base of New Jersey made it more difficult for the MPPC to enforce its patents. The Ninth Circuit Court of Appeals, which is headquartered in San Francisco, California, and covers the area, was averse to enforcing patent claims. Southern California was also chosen because of its beautiful year-round weather and varied countryside; its topography, semi-arid climate and widespread irrigation gave its landscapes the ability to offer motion picture shooting scenes set in deserts, jungles and great mountains. 
Hollywood had one additional advantage: if a non-licensed studio was sued, it was only a hundred miles to "run for the border" and get out of the US to Mexico, where the trust's patents were not in force. In addition to its producer members, the MPPC included the leading film distributor (George Kleine) and the biggest supplier of raw film stock, Eastman Kodak. The MPPC ended the domination of foreign films on US screens, standardized the manner in which films were distributed and exhibited within the US, and improved the quality of US motion pictures by internal competition. But it also discouraged its members' entry into feature film production, and the use of outside financing, both to its members' eventual detriment. Creation The MPPC was preceded by the Edison licensing system, in effect in 1907–1908, on which the MPPC was modeled. During the 1890s, Thomas Edison owned most of the major US patents relating to motion picture cameras. The Edison Manufacturing Company's patent lawsuits against each of its domestic competitors crippled the US film industry, reducing production mainly to two companies: Edison and Biograph, which used a different camera design. This left Edison's other rivals with little recourse but to import French and British films. Since 1902, Edison had also been notifying distributors and exhibitors that if they did not use Edison machines and films exclusively, they would be subject to litigation for supporting filmmaking that infringed Edison's patents. Exhausted by the lawsuits, Edison's competitors — Essanay, Kalem, Pathé Frères, Selig, and Vitagraph — approached him in 1907 to negotiate a licensing agreement, which Lubin was also invited to join. The one notable filmmaker excluded from the licensing agreement was Biograph, which Edison hoped to squeeze out of the market. No further applicants could become licensees. The purpose of the licensing agreement, according to an Edison lawyer, was to "preserve the business of present manufacturers and not to throw the field open to all competitors."
In February 1909, major European producers held the Paris Film Congress in an attempt to create a similar European organisation. This group also included MPPC members Pathé and Vitagraph, which had extensive European production and distribution interests. This proposed European cartel ultimately failed when Pathé, then still the largest company in the world, withdrew in April. The addition of Biograph Biograph retaliated for being frozen out of the trust agreement by purchasing the patent to the Latham film loop, a key feature of virtually all motion picture cameras then in use. Edison sued to gain control of the patent; however, after a federal court upheld the validity of the patent in 1907, Edison began negotiations with Biograph in May 1908 to reorganize the Edison licensing system. The resulting trust pooled 16 motion picture patents. Ten were considered of minor importance; of the remaining six key patents, one each covered films, cameras, and the Latham loop, and three covered projectors. Policies The MPPC eliminated the outright sale of films to distributors and exhibitors, replacing
be changed freely by the collision-finding algorithm. An example MD5 collision, with the two messages differing in 6 bits, is:

d131dd02c5e6eec4 693d9a0698aff95c 2fcab58712467eab 4004583eb8fb7f89
55ad340609f4b302 83e488832571415a 085125e8f7cdc99f d91dbdf280373c5b
d8823e3156348f5b ae6dacd436c919c6 dd53e23487da03fd 02396306d248cda0
e99f33420f577ee8 ce54b67080a80d1e c69821bcb6a88393 96f9652b6ff72a70

d131dd02c5e6eec4 693d9a0698aff95c 2fcab50712467eab 4004583eb8fb7f89
55ad340609f4b302 83e4888325f1415a 085125e8f7cdc99f d91dbd7280373c5b
d8823e3156348f5b ae6dacd436c919c6 dd53e2b487da03fd 02396306d248cda0
e99f33420f577ee8 ce54b67080280d1e c69821bcb6a88393 96f965ab6ff72a70

Both produce the MD5 hash 79054025255fb1a26e4bc422aef54eb4. The difference between the two samples is that the leading bit of each of six bytes has been flipped. For example, the 20th byte (offset 0x13) in the top sample, 0x87, is 10000111 in binary. The leading bit in the byte (also the leading bit in the first nibble) is flipped to make 00000111, which is 0x07, as shown in the lower sample. Later it was also found to be possible to construct collisions between two files with separately chosen prefixes. This technique was used in the creation of the rogue CA certificate in 2008. A new variant of parallelized collision searching using MPI was proposed by Anton Kuznetsov in 2014, which allowed finding a collision in 11 hours on a computing cluster. Preimage vulnerability In April 2009, an attack against MD5 was published that breaks MD5's preimage resistance. This attack is only theoretical, with a computational complexity of 2^123.4 for full preimage. Applications MD5 digests have been widely used in the software world to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 (known as md5sum) checksum for the files, so that a user can compare the checksum of the downloaded file to it.
Most Unix-based operating systems include MD5 sum utilities in their distribution packages; Windows users may use the included PowerShell function "Get-FileHash", install a Microsoft utility, or use third-party applications. Android ROMs also use this type of checksum. As it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. In some cases, the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files.

Historically, MD5 has been used to store a one-way hash of a password, often with key stretching. NIST does not include MD5 in their list of recommended hashes for password storage.

MD5 is also used in the field of electronic discovery, to provide a unique identifier for each document that is exchanged during the legal discovery process. This method can be used to replace the Bates stamp numbering system that has been used for decades during the exchange of paper documents. As above, this usage should be discouraged due to the ease of collision attacks.

Algorithm

MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into 512-bit blocks (sixteen 32-bit words); the message is padded so that its length is divisible by 512. The padding works as follows: first, a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with 64 bits representing the length of the original message, modulo 2^64.
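The padding scheme just described can be sketched in a few lines of Python (the function name is illustrative, not a standard API):

```python
import struct

def md5_pad(message: bytes) -> bytes:
    """Append MD5's padding: a single 1 bit (the 0x80 byte), then zeros
    until the length is 64 bits short of a multiple of 512 bits, then the
    original bit length as a 64-bit little-endian integer."""
    bit_len = (len(message) * 8) % 2**64
    padded = message + b"\x80"                     # the single 1 bit
    padded += b"\x00" * ((56 - len(padded)) % 64)  # zeros up to 56 bytes mod 64
    padded += struct.pack("<Q", bit_len)           # original length mod 2^64
    return padded

padded = md5_pad(b"abc")
assert len(padded) % 64 == 0                  # a multiple of 512 bits
assert padded[-8:] == struct.pack("<Q", 24)   # 24 = bit length of "abc"
```

Note that padding is always applied, so even an empty message becomes one full 512-bit block.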
The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C, and D. These are initialized to certain fixed constants. The main algorithm then uses each 512-bit message block in turn to modify the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation. Figure 1 illustrates one operation within a round. There are four possible functions; a different one is used in each round:

F(B, C, D) = (B ∧ C) ∨ (¬B ∧ D)
G(B, C, D) = (B ∧ D) ∨ (C ∧ ¬D)
H(B, C, D) = B ⊕ C ⊕ D
I(B, C, D) = C ⊕ (B ∨ ¬D)

⊕, ∧, ∨, ¬ denote the XOR, AND, OR and NOT operations respectively.

Pseudocode

The MD5 hash is calculated according to this algorithm. All values are in little-endian.

// : All variables are unsigned 32 bit and wrap modulo 2^32 when calculating
var int s[64], K[64]
var int i

// s specifies the per-round shift amounts
s[ 0..15] := { 7, 12, 17, 22,  7, 12, 17, 22,  7, 12, 17, 22,  7, 12, 17, 22 }
s[16..31] := { 5,  9, 14, 20,  5,  9, 14, 20,  5,  9, 14, 20,  5,  9, 14, 20 }
s[32..47] := { 4, 11, 16, 23,  4, 11, 16, 23,  4, 11, 16, 23,  4, 11, 16, 23 }
s[48..63] := { 6, 10, 15, 21,  6, 10, 15, 21,  6, 10, 15, 21,  6, 10, 15, 21 }

// Use binary integer part of the sines of integers (Radians) as constants:
for i from 0 to 63 do
    K[i] := floor(2^32 × abs(sin(i + 1)))
end for
// (Or just use the following precomputed table):
K[ 0.. 3] := { 0xd76aa478, 0xe8c7b756, 0x242070db, 0xc1bdceee }
K[ 4.. 7] := { 0xf57c0faf, 0x4787c62a, 0xa8304613, 0xfd469501 }
K[ 8..11] := { 0x698098d8, 0x8b44f7af, 0xffff5bb1, 0x895cd7be }
K[12..15] := { 0x6b901122, 0xfd987193, 0xa679438e, 0x49b40821 }
K[16..19] := { 0xf61e2562, 0xc040b340, 0x265e5a51, 0xe9b6c7aa }
K[20..23] := { 0xd62f105d, 0x02441453, 0xd8a1e681, 0xe7d3fbc8 }
K[24..27] := { 0x21e1cde6, 0xc33707d6, 0xf4d50d87, 0x455a14ed }
K[28..31] := { 0xa9e3e905, 0xfcefa3f8, 0x676f02d9, 0x8d2a4c8a }
K[32..35] := { 0xfffa3942, 0x8771f681, 0x6d9d6122, 0xfde5380c }
K[36..39] := { 0xa4beea44, 0x4bdecfa9, 0xf6bb4b60, 0xbebfbc70 }
K[40..43] := { 0x289b7ec6, 0xeaa127fa, 0xd4ef3085, 0x04881d05 }
K[44..47] := { 0xd9d4d039, 0xe6db99e5, 0x1fa27cf8, 0xc4ac5665 }
K[48..51] := { 0xf4292244, 0x432aff97, 0xab9423a7, 0xfc93a039 }
K[52..55] := { 0x655b59c3, 0x8f0ccc92, 0xffeff47d, 0x85845dd1 }
K[56..59] := { 0x6fa87e4f, 0xfe2ce6e0, 0xa3014314, 0x4e0811a1 }
K[60..63] := { 0xf7537e82, 0xbd3af235, 0x2ad7d2bb, 0xeb86d391 }

// Initialize variables:
var int a0 := 0x67452301   // A
var int b0 := 0xefcdab89   // B
var int c0 := 0x98badcfe   // C
var int d0 := 0x10325476   // D

// Pre-processing: adding a single 1 bit
append "1" bit to message
// Notice: the input bytes are considered as bits strings,
//  where the first bit is the most significant bit of the byte.

// Pre-processing: padding with zeros
append "0" bit until message length in bits ≡ 448 (mod 512)

// Notice: the two padding steps above are implemented in a simpler way
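The pseudocode, which is cut off mid-listing here, can be completed as a compact, runnable implementation. A Python sketch following the published algorithm, checked against hashlib:

```python
import math
import struct

# Per-round shift amounts and sine-derived constants, as in the pseudocode.
S = [7, 12, 17, 22] * 4 + [5, 9, 14, 20] * 4 \
  + [4, 11, 16, 23] * 4 + [6, 10, 15, 21] * 4
K = [int(abs(math.sin(i + 1)) * 2**32) & 0xFFFFFFFF for i in range(64)]
assert K[0] == 0xd76aa478 and K[63] == 0xeb86d391  # matches the table above

def _rotl(x: int, c: int) -> int:
    """Left-rotate a 32-bit value by c bits."""
    return ((x << c) | (x >> (32 - c))) & 0xFFFFFFFF

def md5(message: bytes) -> str:
    # Padding: a 1 bit, zeros to 448 mod 512 bits, then length mod 2^64.
    bit_len = (len(message) * 8) % 2**64
    message = message + b"\x80" + b"\x00" * ((55 - len(message)) % 64)
    message += struct.pack("<Q", bit_len)

    a0, b0, c0, d0 = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
    for off in range(0, len(message), 64):       # one 512-bit block at a time
        M = struct.unpack("<16I", message[off:off + 64])
        A, B, C, D = a0, b0, c0, d0
        for i in range(64):
            if i < 16:    # round 1: F, message words in order
                F, g = (B & C) | (~B & D), i
            elif i < 32:  # round 2: G
                F, g = (D & B) | (~D & C), (5 * i + 1) % 16
            elif i < 48:  # round 3: H
                F, g = B ^ C ^ D, (3 * i + 5) % 16
            else:         # round 4: I
                F, g = C ^ (B | ~D), (7 * i) % 16
            F = (F + A + K[i] + M[g]) & 0xFFFFFFFF
            A, D, C, B = D, C, B, (B + _rotl(F, S[i])) & 0xFFFFFFFF
        a0 = (a0 + A) & 0xFFFFFFFF
        b0 = (b0 + B) & 0xFFFFFFFF
        c0 = (c0 + C) & 0xFFFFFFFF
        d0 = (d0 + D) & 0xFFFFFFFF
    # Output is a0..d0, each serialized little-endian.
    return struct.pack("<4I", a0, b0, c0, d0).hex()
```

This is an educational sketch, not production code; given MD5's broken collision resistance, real applications should use hashlib and a modern hash.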
It can be played in various rule formats, which fall into two categories: constructed and limited. Limited formats involve players building a deck spontaneously out of a pool of random cards with a minimum deck size of 40 cards; in constructed formats, players create decks from cards they own, usually with a minimum of 60 cards per deck. New cards are released on a regular basis through expansion sets. An organized tournament system (the WPN) played at the international level and a worldwide community of professional Magic players have developed, as well as a substantial resale market for Magic cards. Certain cards can be valuable due to their rarity in production and utility in gameplay, with prices ranging from a few cents to tens of thousands of dollars.

Gameplay

A standard game of Magic involves two or more players engaged in a battle as powerful wizards, known as Planeswalkers. Each player has their own deck of cards, either one previously constructed or made from a limited pool of cards for the event. A player starts the game with a "life total" of twenty and loses the game when their life total is reduced to zero. A player can also lose if they must draw from their deck when no cards are left. In addition, some cards specify other ways to win or lose the game. Cards in Magic: The Gathering have a consistent format, with half of the face of the card showing the card's art and the other half listing the card's mechanics, often relying on commonly reused keywords to simplify the card's text. Cards generally fall into two classes: lands and spells. Lands provide mana, or magical energy, which is used as fuel when the player attempts to cast spells. Players can only play one land card per turn. Most lands provide a specific color of mana when they are "tapped" (usually by rotating the card 90 degrees to show it has been used that turn), and each land can be tapped for mana only once per turn.
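The land mechanic described above (one mana of the land's color per tap, one tap per turn) can be modeled as a toy sketch; the class and method names are illustrative, not official game terminology:

```python
from typing import Optional

class Land:
    """Toy model of a land card: taps once per turn for one mana of its color."""
    def __init__(self, color: str):
        self.color = color
        self.tapped = False

    def tap(self) -> Optional[str]:
        """Return one mana of this land's color, or None if already tapped."""
        if self.tapped:
            return None
        self.tapped = True
        return self.color

    def untap(self) -> None:
        """Untap at the start of the owner's turn."""
        self.tapped = False

forest = Land("green")
assert forest.tap() == "green"  # produces one green mana
assert forest.tap() is None     # a land taps only once per turn
forest.untap()                  # new turn: the land untaps
assert forest.tap() == "green"
```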
Spells consume mana, typically including at least one mana of a specific color. More powerful spells cost more mana, so as the game progresses, more land will be in play, more mana will be available, and the quantity and relative power of the spells played tends to increase. Spells come in several varieties: non-permanents like "sorceries" and "instants" have a single, one-time effect before they go to the "graveyard" (discard pile); "enchantments" and "artifacts" remain in play after being cast to provide a lasting magical effect; and "creature" spells summon creatures that can attack and damage an opponent, as well as be used to defend against the opponent's creature attacks. Land, enchantment, artifact, and creature cards are considered permanents, as they remain in play until removed by other spells, abilities, or combat effects. The set Lorwyn introduced the "planeswalker" card type, which represents powerful allies who fight with their own magic abilities.

Players begin the game by shuffling their decks and then drawing seven cards. On each player's turn, following a set phase order, they draw a card, tap their lands and other permanents as necessary to gain mana to cast spells, engage their creatures in a single attack round against their opponent, who may use their own creatures to block the attack, and then complete other actions with any remaining mana. Tapped resources remain tapped until the start of the player's next turn, which may leave them without land to tap for mana to cast spells in reaction to their opponent, or without creatures to block attacks, so the player must also plan ahead for their opponent's turn. Most actions that a player can perform enter the "Stack", a concept similar to the stack in computer programming: either player can react to these actions with other actions, such as counter-spells, and the stack provides a method of resolving the complex interactions that may result.
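The Stack resolves in last-in, first-out order, exactly like a stack data structure; a toy sketch of the resolution order (card names are only illustrative, and the countering effect itself is not modeled):

```python
# Toy model of the Stack: actions resolve last-in, first-out,
# so a response put on the stack later resolves before the spell
# it responds to.
stack = []
stack.append("Lightning Bolt")   # player A casts a spell
stack.append("Counterspell")     # player B responds; it goes on top

resolved = []
while stack:
    resolved.append(stack.pop()) # the top of the stack resolves first

assert resolved == ["Counterspell", "Lightning Bolt"]
```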
Deck construction

Deck building requires strategy, as players must choose among thousands of cards which they want to play. This requires players to evaluate the power of their cards, as well as the possible synergies between them and their possible interactions with the cards they expect to play against (this "metagame" can vary in different locations or time periods). The choice of cards is usually narrowed by the player deciding which colors they want to include in the deck. This decision is a key part of creating a deck. In general, reducing the number of colors used increases the consistency of play and the probability of drawing the lands needed to cast one's spells, at the expense of restricting the range of tactics available to the player. Part of the Magic product line has been starter decks, which aim to provide novice players with ideas for deck building. Players expand their card library through booster packs, which contain a random distribution of cards from a specific Magic set, organized by rarity. These rarities are known as Common, Uncommon, Rare, and Mythic, with the more powerful cards generally having higher rarities.

Most sanctioned games of Magic: The Gathering under the WPN use Constructed formats, which require players to build their decks from their own library of cards. In general, this requires a minimum of sixty cards in the deck and, except for basic land cards, no more than four copies of the same named card. The pool of cards is also typically limited to the Standard rotation, which consists of the base sets and expansions released in the last two years. The Standard format helps to prevent the "power creep" that can be difficult to predict given the size of the Magic card library, and helps give newer players a fair chance against long-term players. Other Constructed formats exist that allow the use of older expansions to give more variety for decks.
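The core Constructed deck rules summarized above (60-card minimum, at most four copies of any named card except basic lands) can be checked mechanically; a minimal sketch with an illustrative function name and placeholder card names:

```python
from collections import Counter

BASIC_LANDS = {"Plains", "Island", "Swamp", "Mountain", "Forest"}

def is_legal_constructed_deck(deck: list) -> bool:
    """Core Constructed rules: at least 60 cards, and no more than
    four copies of any named card except basic lands."""
    if len(deck) < 60:
        return False
    return all(count <= 4
               for card, count in Counter(deck).items()
               if card not in BASIC_LANDS)

# 24 basic lands plus nine playsets of four (spell names are placeholders):
deck = ["Forest"] * 24 + [f"Spell {i // 4}" for i in range(36)]
assert is_legal_constructed_deck(deck)                    # 60 cards, legal
assert not is_legal_constructed_deck(deck[:59])           # too few cards
assert not is_legal_constructed_deck(deck + ["Spell 0"])  # fifth copy
```

This ignores format-specific banned and restricted lists, which would be additional lookups against a published list.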
A large variety of formats have been defined by the Wizards Play Network (WPN), formerly the DCI, that allow different pools of expansions to be used or that alter deck construction rules for special events. In the Limited format, a small number of cards are opened for play from booster packs or tournament packs, and a minimum deck size of forty cards is enforced. The most popular limited format is Booster Draft, in which players open a booster pack, choose a card from it, and pass it to the player seated next to them. This continues until all the cards have been picked, and then a new pack is opened. Three packs are opened in total, and the direction of passing alternates left-right-left. Once the draft is done, players create 40-card decks out of the cards they picked, basic land cards being provided for free, and play games with the players they drafted with. There are no banned or restricted cards in limited formats.

Limitations

Individual cards may be listed as "restricted", where only one copy can be included in a deck, or simply "banned", at the WPN's discretion. These limitations are usually made for balance-of-power reasons, but have occasionally been made because of gameplay mechanics; for example, with the elimination of the "play for ante" mechanic in all formal formats, all cards with this feature are banned. During the COVID-19 pandemic, which drew more players to the online Magic games and generated volumes of data on popular deck constructions, Wizards was able to track popular combinations more quickly than in a purely paper game, and in mid-2020 banned additional cards that in specific combos could draw out games far longer than desired. Older cards have been banned from all formal play by Wizards due to inappropriate racial or cultural depictions in their text or illustrations in the wake of the George Floyd protests, and their images have been blocked or removed from online Magic databases.
This included a card called "Invoke Prejudice", which was displayed on the official card index site Gatherer "at a web URL ending in '1488,' numbers that are synonymous with white supremacy." These cards (Invoke Prejudice, Cleanse, Stone-Throwing Devils, Pradesh Gypsies, Jihad, Imprison, and Crusade) dated back to 1994.

Colors of Magic

Most cards in Magic are based on one of five colors that make up the game's "Color Wheel" or "Color Pie", shown on the back of each card, each representing a school or realm of magic: white, blue, black, red, and green. The arrangement of these colors on the wheel describes relationships between the schools, which can broadly affect deck construction and game execution. For a given color such as white, the two colors immediately adjacent to it, green and blue, are considered complementary, while the two colors on the opposite side, black and red, are its opposing schools. The Research and Development (R&D) team at Wizards of the Coast aimed to balance power and abilities among the five colors by using the Color Pie to differentiate the strengths and weaknesses of each. This guideline lays out the capabilities, themes, and mechanics of each color and allows every color to have its own distinct attributes and gameplay. The Color Pie is used to ensure new cards are thematically in the correct color and do not infringe on the territory of other colors. The concepts behind each of the colors on the Color Wheel, based on a series of articles written by Mark Rosewater, are as follows:

White represents order, peace, and light, and draws mana from plains. White planeswalkers can summon individually weak creatures that are collectively strong as a group, such as soldiers, as well as powerful creatures and leaders that can impart buffs across all of the player's creatures. Their spells tend to focus on healing or preventing damage, protecting their allies, and neutralizing an opponent's advantages on the battlefield.
Blue represents intellect, logic, manipulation, and trickery, and pulls its mana from islands. Its magic is typically associated with the classical elements of air and water. Many of Blue's spells can interact or interfere with the opponent's spells as well as with the general flow of the game. Blue's magic is also associated with control, allowing the player to gain temporary or full control of the opponent's creatures. Blue creatures tend to be weak but evasive and difficult to target.

Black represents power, death, corruption, and sacrifice, drawing mana from swamps. Many of Black's creatures are undead, and several can be sacrificed to make other creatures more powerful, destroy an opponent's creatures or permanents, or produce other effects. Black creatures may be able to drain the life taken in an attack back to their caster, or may even be able to kill creatures outright through a deathtouch effect. Black's spells similarly coerce sacrifice of cards or life by the player or their opponent.

Red represents freedom, chaos, fury, and warfare, pulling its power from mountains. Its powers are associated with the classical fire and earth elements, and it tends to have the strongest direct-damage spells, such as fireballs that can be powered up by tapping additional mana when cast. Red is an offense-oriented color: in addition to powerful creatures like dragons, red planeswalkers can summon weak creatures that can strike quickly to gain a short-term edge.

Green is the color of life, nature, evolution, and indulgence, drawing mana from forests. Green has the widest array of creatures to draw upon, ranging across all power levels, and generally is able to dominate the battlefield with many creatures in play at once. Green creatures and spells can generate life points and mana, and can also gain massive strength through spells.

Most cards in Magic: The Gathering are based on a single color, shown along the card's border.
The cost to play them requires some mana of that color and potentially any amount of mana of any other color. Multicolored cards were introduced in the Legends expansion and typically use a gold border. Their casting cost includes mana from at least two colors plus additional mana of any color. Hybrid cards, introduced with Ravnica, use a two-color gradient border. These cards can be cast using mana from either color shown, in addition to other mana costs. Finally, colorless cards, such as some artifacts, do not have any colored mana requirements but still require a general amount of mana to be spent to play.

The color wheel can influence deck construction choices. Cards from colors that are aligned, such as red and green, often provide synergistic effects, either due to the core nature of the schools or through the designs of cards, but may leave the deck vulnerable to the magic of the common color in conflict (blue, in the case of red and green). Alternatively, decks constructed with opposing colors like green and blue may not have many favorable combinations but will be capable of dealing with decks based on any other colors. There are no limits to how many colors can be in a deck, but the more colors in a deck, the more difficult it may be to provide mana of the right color.

Luck vs. skill

Magic, like many other games, combines chance and skill. One frequent complaint about the game involves the notion that there is too much luck involved, especially concerning drawing too many or too few lands. Early in the game especially, too many or too few lands can ruin a player's chance at victory without the player having made a mistake. This in-game statistical variance can be minimized by proper deck construction, as an appropriate land count can reduce mana problems. In Duels of the Planeswalkers 2012, the land count is automatically adjusted to 40% of the total deck size.
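The mana-variance point can be quantified: for a deck of known size and land count, the number of lands in an opening hand follows the hypergeometric distribution. A sketch (the 24/60 split mirrors the 40% figure mentioned above; the "2-5 lands" range is just an example of a playable hand):

```python
from math import comb

def p_lands_in_hand(deck_size: int, lands: int, hand: int, k: int) -> float:
    """Probability of drawing exactly k lands in an opening hand of the
    given size (hypergeometric distribution, drawing without replacement)."""
    return comb(lands, k) * comb(deck_size - lands, hand - k) / comb(deck_size, hand)

# 60-card deck with 24 lands (40%), 7-card opening hand:
p_playable = sum(p_lands_in_hand(60, 24, 7, k) for k in range(2, 6))
print(f"P(2-5 lands in opening hand): {p_playable:.3f}")
```

Raising or lowering the land count shifts this distribution, which is exactly the consistency trade-off deck builders manage.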
A "mulligan" rule was introduced into the game, first informally in casual play and then in the official game rules. In multiplayer, a player may take one mulligan without penalty, while subsequent mulligans will still cost one card (a rule known as "Partial Paris mulligan"). The original mulligan allowed a player a single redraw of seven new cards if that player's initial hand contained seven or zero lands. A variation of this rule called a "forced mulligan" is still used in some casual play circles and in multiplayer formats on Magic Online, and allows a single "free" redraw of seven new cards if a player's initial hand contains seven, six, one or zero lands. With the release of the Core Set 2020, a new mulligan system was introduced for competitive play known as the London Mulligan. Under this rule, after taking a mulligan, the player redraws 7 new cards, and then chooses 1 card to place on the bottom of their library for each mulligan they have taken (or chooses to mulligan again, drawing another 7 cards.) This mulligan rule is generally considered less punishing to mulligans than the prior mulligan rule, in which a player would simply draw one less card each time they mulliganed, rather than drawing 7 new cards after each mulligan, and subsequently choosing to “bottom” one card per mulligan taken. Confessing his love for games combining both luck and skill, Magic creator Richard Garfield admitted its influence in his design of Magic. In addressing the complaint about luck influencing a game, Garfield states that new and casual players tend to appreciate luck as a leveling effect, since randomness can increase their chances of winning against a more skilled player. Meanwhile, a player with higher skills appreciates a game with less chance, as the higher degree of control increases their chances of winning. According to Garfield, Magic has and would likely continue decreasing its degree of luck as the game matured. 
The "Mulligan rule", as well as card design, past vs. present, are good examples of this trend. He feels that this is a universal trend for maturing games. Garfield explained using chess as an example, that unlike modern chess, in predecessors, players would use dice to determine which chess piece to move. Gambling The original set of rules prescribed that all games were to be played for ante. Garfield was partly inspired by the game of marbles and added this rule because he wanted the players to play with the cards rather than simply collect them. The ante rule stated that each player must remove a card at random from the deck they wished to play with before the game began, and the two cards would be set aside together as the ante. At the end of the match, the winner would take and keep both cards. Early sets included a few cards with rules designed to interact with this gambling aspect, allowing replacements of cards up for ante, adding more cards to the ante, or even permanently trading ownership of cards in play. The ante concept became controversial because many regions had restrictions on games of chance. The ante rule was soon made optional because of these restrictions and because of players' reluctance to possibly lose a card that they owned. The gambling rule was also forbidden at sanctioned events. The last card to mention ante was printed in the 1995 expansion set Homelands. Organized play The Wizards Play Network (WPN), formally the DCI, is the organizing body for sanctioned Magic events; it is owned and operated by Wizards of the Coast. The WPN establishes the set allowances and card restrictions for the Constructed and Limited formats for regulation play for tournaments as well as for other events. "Thousands of games shops" participate in Friday Night Magic (FNM), an event sponsored by the WPN; it is advertised as "the event where new players can approach the game, and start building their community". 
FNM offers both sanctioned tournament formats and all casual formats. In 2018, The New Yorker reported that "even as it has grown in popularity and size, Magic flies low to the ground. It thrives on the people who gather at lunch tables,
The Research and Development (R&D) team at Wizards of the Coast aimed to balance power and abilities among the five colors by using the Color Pie to differentiate the strengths and weaknesses of each. This guideline lays out the capabilities, themes, and mechanics of each color and allows for every color to have its own distinct attributes and gameplay. The Color Pie is used to ensure new cards are thematically in the correct color and do not infringe on the territory of other colors. The concepts behind each of the colors on the Color Wheel, based on a series of articles written by Mark Rosewater, are as follows: White represents order, peace, and light, and draws mana from plains. White planeswalkers can summon individually weak creatures that are collectively strong as a group such as soldiers, as well as powerful creatures and leaders that can impart buffs across all of the player's creatures. Their spells tend to focus on healing or preventing damage, protecting their allies, and neutralizing an opponent's advantages on the battlefield. Blue represents intellect, logic, manipulation, and trickery, and pulls its mana from islands. Its magic is typically associated with the classical elements of air and water. Many of Blue's spells can interact or interfere with the opponent's spells as well as with the general flow of the game. Blue's magic is also associated with control, allowing the player to gain temporary or full control of the opponent's creatures. Blue creatures often tend to be weak but evasive and difficult to target. Black represents power, death, corruption, and sacrifice, drawing mana from swamps. Many of Black's creatures are undead, and several can be sacrificed to make other creatures more powerful, destroy opponent's creatures or permanents, or other effects. Black creatures may be able to draw the life taken in an attack back to their caster, or may even be able to kill creatures through a deathtouch effect. 
Black's spells similarly coerce sacrifice by the player or their opponent through cards or life. Red represents freedom, chaos, fury, and warfare, pulling its power from mountains. Its powers are associated with the classical fire and earth elements, and tends to have the strongest spells such as fireballs that can be powered-up by tapping additional mana when cast. Red is an offense-oriented class: in addition to powerful creatures like dragons, red planeswalkers can summon weak creatures that can strike quickly to gain the short-term edge. Green is the color of life, nature, evolution, and indulgence, drawing mana from forests. Green has the widest array of creatures to draw upon, ranging across all power levels, and generally is able to dominate the battlefield with many creatures at play at once. Green creatures and spells can generate life points and mana, and can also gain massive strength through spells. Most cards in Magic: The Gathering are based on a single color, shown along the card's border. The cost to play them requires some mana of that color and potentially any amount of mana from any other color. Multicolored cards were introduced in the Legends expansion and typically use a gold border. Their casting cost includes mana from at least two colors plus additional mana from any color. Hybrid cards, included with Ravnica, use a two-color gradient border. These cards can be cast using mana from either color shown, in addition to other mana costs. Finally, colorless cards, such as some artifacts, do not have any colored mana requirements but still require a general amount of mana to be spent to play. The color wheel can influence deck construction choices. Cards from colors that are aligned such as red and green often provide synergistic effects, either due to the core nature of the schools or through designs of cards, but may leave the deck vulnerable to the magic of the common color in conflict, blue in the case of red and green. 
Alternatively, decks constructed with opposing colors like green and blue may not have many favorable combinations but will be capable of dealing with decks based on any other colors. There is no limit to how many colors can be in a deck, but the more colors in a deck, the more difficult it may be to provide mana of the right color.
Luck vs. skill
Magic, like many other games, combines chance and skill. One frequent complaint about the game involves the notion that there is too much luck involved, especially concerning possessing too many or too few lands. Early in the game especially, too many or too few lands could ruin a player's chance at victory without the player having made a mistake. This in-game statistical variance can be minimized by proper deck construction, as an appropriate land count can reduce mana problems. In Duels of the Planeswalkers 2012, the land count is automatically adjusted to 40% of the total deck size. A "mulligan" rule was introduced into the game, first informally in casual play and then in the official game rules. The original mulligan allowed a player a single redraw of seven new cards if that player's initial hand contained seven or zero lands. A variation of this rule, called a "forced mulligan", is still used in some casual play circles and in multiplayer formats on Magic Online, and allows a single "free" redraw of seven new cards if a player's initial hand contains seven, six, one or zero lands. In multiplayer, a player may take one mulligan without penalty, while subsequent mulligans still cost one card (a rule known as the "Partial Paris" mulligan). With the release of Core Set 2020, a new mulligan system known as the London Mulligan was introduced for competitive play. Under this rule, after taking a mulligan, the player redraws seven new cards and then chooses one card to place on the bottom of their library for each mulligan they have taken (or chooses to mulligan again, drawing another seven cards).
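The land-count variance described above can be quantified: the number of lands in a seven-card opening hand follows a hypergeometric distribution. A minimal sketch, assuming an illustrative 60-card deck with 24 lands (rule-of-thumb figures chosen for the example, not official numbers):

```python
from math import comb

def land_distribution(deck_size=60, lands=24, hand=7):
    """Hypergeometric probability of drawing exactly k lands in the
    opening hand. The 60-card / 24-land split is an illustrative
    assumption, not an official figure."""
    total = comb(deck_size, hand)
    return {k: comb(lands, k) * comb(deck_size - lands, hand - k) / total
            for k in range(hand + 1)}

dist = land_distribution()
# A hand with roughly 2-5 lands is usually playable; the extremes
# (too few or too many lands) are the hands that invite a mulligan.
playable = sum(p for k, p in dist.items() if 2 <= k <= 5)
print(f"P(playable opening hand) = {playable:.2f}")
```

With these numbers the playable-hand probability comes out around 84%, which illustrates why a roughly 40% land ratio, as used automatically in Duels of the Planeswalkers 2012, is a common baseline.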
The London Mulligan is generally considered less punishing than the previous rule, under which a player simply drew one fewer card for each mulligan taken, rather than redrawing seven cards and then putting one card per mulligan on the bottom of the library. Confessing his love for games that combine luck and skill, Magic creator Richard Garfield has acknowledged that this preference influenced his design of Magic. In addressing the complaint about luck influencing a game, Garfield states that new and casual players tend to appreciate luck as a leveling effect, since randomness can increase their chances of winning against a more skilled player. Meanwhile, a player with higher skill appreciates a game with less chance, as the higher degree of control increases their chances of winning. According to Garfield, Magic has decreased, and would likely continue to decrease, its degree of luck as the game matures. The evolution of the mulligan rule, as well as of card design, is a good example of this trend, which he feels is universal for maturing games. Garfield explained, using chess as an example, that in some predecessors of modern chess, players used dice to determine which piece to move.
Gambling
The original set of rules prescribed that all games were to be played for ante. Garfield was partly inspired by the game of marbles and added this rule because he wanted players to play with the cards rather than simply collect them. The ante rule stated that each player must remove a card at random from the deck they wished to play with before the game began, and the two cards would be set aside together as the ante. At the end of the match, the winner would take and keep both cards. Early sets included a few cards with rules designed to interact with this gambling aspect, allowing replacement of cards up for ante, adding more cards to the ante, or even permanently trading ownership of cards in play.
The ante concept became controversial because many regions had restrictions on games of chance. The ante rule was soon made optional because of these restrictions and because of players' reluctance to risk losing a card that they owned. The gambling rule was also forbidden at sanctioned events. The last card to mention ante was printed in the 1995 expansion set Homelands.
Organized play
The Wizards Play Network (WPN), formerly the DCI, is the organizing body for sanctioned Magic events; it is owned and operated by Wizards of the Coast. The WPN establishes the set allowances and card restrictions for the Constructed and Limited formats for regulation play for tournaments as well as for other events. Thousands of game shops participate in Friday Night Magic (FNM), an event sponsored by the WPN; it is advertised as "the event where new players can approach the game, and start building their community". FNM offers both sanctioned tournament formats and all casual formats. In 2018, The New Yorker reported that "even as it has grown in popularity and size, Magic flies low to the ground. It thrives on the people who gather at lunch tables, in apartments, or in one of the six thousand stores worldwide that Wizards has licensed to put on weekly tournaments dubbed Friday Night Magic". FNM tournaments can act as a stepping-stone to more competitive play.
Tournaments
Magic tournaments regularly occur in gaming stores and other venues. Larger tournaments with hundreds of competitors from around the globe, sponsored by Wizards of the Coast, are arranged many times every year, with substantial cash prizes for the top finishers. A number of websites report on tournament news, give complete lists for the most currently popular decks, and feature articles on current issues of debate about the game. Additionally, the WPN maintains a set of rules for sanctioning tournaments and runs its own circuit.
The WPN runs the Pro Tour as a series of major tournaments to attract interest. The right to compete in a Pro Tour has to be earned by either winning a Pro Tour Qualifier Tournament or being successful in a previous tournament on a similar level. A Pro Tour is usually structured into two days of individual competition played in the Swiss format. On the final day, the top eight players compete with each other in an elimination format to select the winner. At the end of the competition in a Pro Tour, players are awarded Pro Points depending on their finishing place. If the player finishes high enough, they will also be awarded prize money. Frequent winners of these events have made names for themselves in the Magic community, such as Gabriel Nassif, Kai Budde and Jon Finkel. As a promotional tool, the DCI launched the Hall of Fame in 2005 to honor selected players. At the end of the year the Magic World Championship is held. The World Championship functions like a Pro Tour, except that competitors have to present their skill in three different formats (usually Standard, booster draft and a second constructed format) rather than one. Another difference is that invitation to the World Championship can be gained not through Pro Tour Qualifiers, but via the national championship of a country. Most countries send their top four players of the tournament as representatives, though nations with minor Magic playing communities may send just one player. The World Championship also has a team-based competition, where the national teams compete with each other. At the beginning of the World Championship, new members are inducted into the Hall of Fame. The tournament also concludes the current season of tournament play and at the end of the event, the player who earned the most Pro Points during the year is awarded the title "Pro Player of the Year". The player who earned the most Pro Points and did not compete in any previous season is awarded the title "Rookie of the Year". 
Invitation to a Pro Tour, Pro Points and prize money can also be earned in lesser tournaments called Grand Prix, which are open to the general public and are held more frequently throughout the year. Grand Prix events are usually the largest Magic tournaments, sometimes drawing more than 2,000 players. The largest Magic tournament ever held was Grand Prix: Las Vegas in June 2013, with a total of 4,500 players.
Development
Inception
Richard Garfield had an early attachment to games during his youth: before settling down in Oregon, his father, an architect, had brought his family to Bangladesh and Nepal during his work projects. Garfield did not speak the native languages, but was able to make friends with the local youth through playing cards or marbles. Once back in the United States, he had heard of Dungeons & Dragons, but neither his local game store nor his friends had a copy, so he developed his own version of what he thought the game would be based on the descriptions he had read, a version he later considered closer to Clue, with players moving from room to room fighting monsters toward a fixed end-goal. When Garfield eventually got copies of the Dungeons & Dragons rulesets, he was surprised that it was a more open-ended game, though one he found "dreadfully written". Dungeons & Dragons open-endedness inspired him, like many others, to develop his own game ideas from it. For Garfield, this was a game he called Five Magics, based on five elemental magics drawn from geographically diverse areas. While this remained the core concept of Five Magics, Garfield continued to refine the game while growing up, often drastically changing the base type of game, though he never planned to publish it. In 1991, Garfield was a doctoral candidate in combinatorial mathematics at the University of Pennsylvania and had been brought on as an adjunct professor at Whitman College.
During his candidacy, he developed his ideas and playtested RoboRally, a board game based on moving robots through a factory filled with hazards. Garfield had been seeking publishers for the title, and his colleague, Mike Davis, suggested the newly formed Wizards of the Coast, a small outfit established by Peter Adkison, a systems analyst for Boeing in Seattle. In mid-1991, the three arranged to meet in Oregon near Garfield's parents' home. Adkison was impressed by RoboRally but considered that it involved too much logistical complexity and would be too risky for him to publish. He told Garfield and Davis that he liked Garfield's ideas and that he was looking for a portable game that could be played in the downtime that frequently occurs at gaming conventions. After the meeting, Garfield remained in Oregon to contemplate Adkison's advice. While hiking near Multnomah Falls, he was inspired to take his Five Magics concept but apply it to collectible color-themed cards, so that each player could make a customizable deck, something each player could consider part of their identity. Garfield arranged to meet with Adkison back in Seattle within the week, and when Adkison heard the idea, he recognized its potential as a game that could be expanded indefinitely with new cards, in contrast to most typical tabletop games; Adkison later wrote of the idea in a USENET post: "If executed properly, [the cards] would make us millions." Adkison immediately agreed to produce it.
Initial design
Garfield returned to Pennsylvania and set off designing the game's core rules and initial cards, with about 150 completed in the few months after his return. The style of gameplay centered on each color remained consistent with Five Magics and with how Magic: The Gathering would remain in the future, such as red representing aggressive attacks.
Other games also influenced the design at this point, with Garfield citing games like Cosmic Encounter and Strat-o-matic Baseball as games that differ each time they are played because of different sets of cards being in play. Initial "cards" used available copyrighted art and were copied onto paper to be tested by groups of volunteers at the university. About six months after the meeting with Adkison, Garfield had refined the first complete version of his game. Garfield also began to set the narrative of the game in "Dominia", a multiverse of infinite "planes" from which players, as wizards, can draw power, which would allow for the vast array of creatures and magics that he was planning for the cards. Garfield has stated that two major influences in his creation of Magic: The Gathering were the games Cosmic Encounter, which first used the concept that normal rules could sometimes be overridden, and Dungeons & Dragons. The "Golden Rule of Magic" states that "Whenever a card's text directly contradicts the rules, the card takes precedence." The Comprehensive Rules, a detailed rulebook, exists to clarify conflicts. Simultaneously, Adkison sought investment in Wizards of the Coast to prepare to publish the game. The company had already committed to completing The Primal Order, a rulebook aimed at compatibility with most other role-playing systems on the market, and most of the available investment was directed there. He had to bring in a number of artists from the local Cornish College of the Arts to create the fantasy art for Garfield's cards, offering them shares in Wizards of the Coast in payment. After The Primal Order was published in 1992, Wizards of the Coast was sued by Palladium for copyright infringement, a case that was settled out of court with the result that a second printing of The Primal Order removed the rules relevant to Palladium's system; this case also financially harmed Wizards of the Coast.
Adkison decided to create a separate company, Garfield Games, for publishing the card game. While the game was simply called Magic through most of playtesting, when the game had to be officially named, a lawyer informed them that the name Magic was too generic to be trademarked. Mana Clash was instead chosen as the name used in the first solicitation of the game. However, everybody involved with the game continued to refer to it simply as Magic. After further legal consultation, it was decided to rename the game Magic: The Gathering, enabling the name to be trademarked.
First releases
By 1993, Garfield and Adkison had gotten everything ready to premiere Magic: The Gathering at that year's Gen Con in Milwaukee that August, but did not have the funds for a production run to be shipped to game stores in time. Adkison took a single box of cards with a handful of complete decks to the Wizards booth at Origins Game Fair, hoping to secure the funds by demonstrating the game. Among those he demonstrated to were representatives of Wargames West, manufacturers of historical tactics games; the representatives eventually brought their CEO over, and after seeing the game, took Adkison to dinner and negotiated funding terms. Adkison returned with enough funding to place the necessary orders. Magic: The Gathering underwent a general release on August 5, 1993. After shipping the orders, Adkison and his wife drove towards Milwaukee, making stops at game stores and demonstrating the game to drum up support for Gen Con. Their initial stops were quiet, but word of mouth from previous stops spread, and as they traveled south and west, they found larger and larger crowds anxiously awaiting their arrival. Garfield met up with Adkison at Gen Con, where their shipment of 2.5 million cards had been delayed a day. Despite this, by the end of the convention, they had completely sold out. Magic was an immediate success for Wizards of the Coast.
By October 1993, they had sold out their supply of 10 million cards. Wizards was even reluctant to advertise the game because they were unable to keep pace with existing demand. Initially Magic attracted many Dungeons & Dragons players, but the following included all types of other people as well.
Expansions
The success of the initial edition prompted a reissue later in 1993, along with expansions to the game. Arabian Nights was released as the first expansion in December 1993. New expansions and revisions of the base game ("Core Sets") have since been released on a regular basis, amounting to four releases a year. By the end of 1994, over a billion cards had been printed. Until the release of Mirage in 1996, expansions were released on an irregular basis. Beginning in 2009, one revision of the core set and a set of three related expansions called a "block" were released every year. This system was revised in 2015, with the Core Set being eliminated and blocks now consisting of two sets, released semiannually. A further revision occurred in 2018, reversing the elimination of the core sets and no longer constraining sets to blocks. While the essence of the game has always stayed the same, the rules of Magic have undergone three major revisions with the release of the Revised Edition in 1994, Classic Edition in 1999, and Magic 2010 in July 2009. With the release of the Eighth Edition in 2003, Magic also received a major visual redesign. In 1996, Wizards of the Coast established the "Pro Tour", a circuit of tournaments where players can compete for sizeable cash prizes over the course of a single weekend-long tournament. In 2009 the top prize at a single tournament was US$40,000. Sanctioned through the DCI, the tournaments added an element of prestige to the game by virtue of the cash payouts and media coverage from within the community. For a brief period of time, ESPN2 televised the tournaments. By April 1997, billions of cards had been sold.
In 1999, Wizards of the Coast was acquired by Hasbro for $325 million, making Magic a Hasbro game. A patent was granted to Wizards of the Coast in 1997 for "a novel method of game play and game components that in one embodiment are in the form of trading cards" that includes claims covering games whose rules include many of Magic's elements in combination, including concepts such as changing the orientation of a game component to indicate use (referred to in the rules of Magic and later of Garfield's games such as Vampire: The Eternal Struggle as "tapping") and constructing a deck by selecting cards from a larger pool. The patent has aroused criticism from some observers, who believe some of its claims to be invalid. In 2003, the patent was an element of a larger legal dispute between Wizards of the Coast and Nintendo, regarding trade secrets related to Nintendo's Pokémon Trading Card Game. The legal action was settled out of court, and its terms were not disclosed. While unofficial methods of online play existed previously, Magic Online (often shortened to "MTGO" or "Modo"), an official online version of the game, was released in 2002. A new, updated version of Magic Online was released in April 2008. In February 2018, Wizards noted that between the years of 2008 and 2016 they had printed over 20 billion Magic: The Gathering cards.
Production and marketing
Magic: The Gathering cards are produced in much the same way as normal playing cards. Each Magic card, approximately 63 × 88 mm in size (2.5 by 3.5 inches), has a face which displays the card's name and rules text as well as an illustration appropriate to the card's concept. 23,318 unique cards have been produced for the game, many of them with variant editions, artwork, or layouts, and 600–1000 new ones are added each year.
The first Magic cards were printed exclusively in English, but current sets are also printed in Simplified Chinese, Traditional Chinese, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish. The overwhelming majority of Magic cards are issued and marketed in the form of sets. For the majority of its history there were two types: the Core Set and the themed expansion sets. Under Wizards of the Coast's current production and marketing scheme, a new set is released quarterly. Various products are released with each set to appeal to different segments of the Magic playing community: The majority of cards are sold in booster packs, which contain fifteen cards normally divided into four rarities, which can be differentiated by the color of the expansion symbol. A fifteen-card Booster Pack will typically contain one rare (gold), three uncommons (silver), ten commons (black), and one basic land (colored black, as commons). Sets prior to Shards of Alara contained eleven commons instead of |
which were developed mainly for the needs of surveying and architecture. A fundamental innovation was the elaboration of proofs by the ancient Greeks: it is not sufficient to verify by measurement that, say, two lengths are equal; such a property must be proved by abstract reasoning from previously proven results (theorems) and from basic properties called postulates, which are considered self-evident because they are too basic to be the subject of a proof. This principle, which is foundational for all mathematics, was elaborated for the sake of geometry and was systematized by Euclid around 300 BC in his book Elements. The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and in (three-dimensional) Euclidean space. Euclidean geometry was developed without a change of methods or scope until the 17th century, when René Descartes introduced what are now called Cartesian coordinates. This was a major change of paradigm: instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points by numbers (their coordinates) and the use of algebra, and later calculus, for solving geometrical problems. This split geometry into two parts that differ only in their methods: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically. Analytic geometry allows the study of new shapes, in particular curves not related to circles and lines; these curves are defined either as graphs of functions (whose study led to differential geometry) or by implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider spaces of dimension higher than three (it suffices to use more than three coordinates), which are no longer a model of physical space.
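Descartes' shift can be illustrated in a few lines: once points are coordinate pairs, a geometric statement such as "this point lies on that circle" becomes a purely algebraic check. A small sketch (the particular points and radius are arbitrary examples):

```python
import math

def on_circle(p, center, r, eps=1e-9):
    """A point (x, y) lies on the circle of radius r about (a, b)
    exactly when (x - a)**2 + (y - b)**2 == r**2: geometry restated
    as algebra on coordinates."""
    (x, y), (a, b) = p, center
    return abs((x - a) ** 2 + (y - b) ** 2 - r ** 2) < eps

def distance(p, q):
    """The length of a segment, likewise reduced to an algebraic formula."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(on_circle((3, 4), (0, 0), 5))   # 3**2 + 4**2 == 5**2
print(distance((0, 0), (3, 4)))
```

Synthetic geometry would establish the same facts by construction and congruence arguments; the analytic approach trades them for routine computation.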
Geometry expanded quickly during the 19th century. A major event was the discovery, in the second half of the century, of non-Euclidean geometries, which are geometries where the parallel postulate is abandoned. This is, besides Russell's paradox, one of the starting points of the foundational crisis of mathematics, as it called into question the truth of the aforementioned postulate. This aspect of the crisis was solved by systematizing the axiomatic method and accepting that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that are invariant under specific transformations of the space. This results in a number of subareas and generalizations of geometry, including:
Projective geometry, introduced in the 17th century by Girard Desargues, which extends Euclidean geometry by adding points at infinity at which parallel lines intersect; this simplifies many aspects of classical geometry by removing the need for a separate treatment of intersecting and parallel lines
Affine geometry, the study of properties relative to parallelism and independent of the concept of length
Differential geometry, the study of curves, surfaces, and their generalizations, which are defined using differentiable functions
Manifold theory, the study of shapes that are not necessarily embedded in a larger space
Riemannian geometry, the study of distance properties in curved spaces
Algebraic geometry, the study of curves, surfaces, and their generalizations, which are defined using polynomials
Topology, the study of properties that are kept under continuous deformations
Algebraic topology, the use in topology of algebraic methods, mainly homological algebra
Discrete geometry, the study of finite configurations in geometry
Convex geometry, the study of convex sets, which takes its importance from its applications in optimization
Complex geometry, the geometry obtained by replacing real numbers with complex numbers
Algebra
Algebra may be viewed as the art of manipulating equations and formulas. Diophantus (3rd century) and al-Khwarizmi (9th century) were two main precursors of algebra. The former solved some relations between unknown natural numbers (that is, equations) by deducing new relations until obtaining the solution. The latter introduced systematic methods for transforming equations, such as moving a term from one side of an equation to the other. The term algebra is derived from the Arabic word he used to name one of these methods in the title of his main treatise. Algebra became a specific area only with François Viète (1540–1603), who introduced the use of letters (variables) for representing unknown or unspecified numbers. This makes it possible to describe concisely the operations to be done on the numbers represented by the variables. Until the 19th century, algebra consisted mainly of the study of linear equations, presently called linear algebra, and of polynomial equations in a single unknown, which were called algebraic equations (a term that is still in use, although it may be ambiguous).
During the 19th century, variables began to represent other things than numbers (such as matrices, modular integers, and geometric transformations), on which some operations, often generalizations of arithmetic operations, can operate. To deal with this, the concept of algebraic structure was introduced, which consists of a set whose elements are unspecified, operations acting on the elements of the set, and rules that these operations must follow. The scope of algebra thus evolved to become essentially the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, the latter term still being used, mainly in an educational context, in opposition to elementary algebra, which is concerned with the older way of manipulating formulas. Some types of algebraic structures have properties that are useful, and often fundamental, in many areas of mathematics. Their study is nowadays divided into autonomous parts of algebra, which include: group theory; field theory; vector spaces, whose study is essentially the same as linear algebra; ring theory; commutative algebra, which is the study of commutative rings, includes the study of polynomials, and is a foundational part of algebraic geometry; homological algebra; Lie algebra and Lie group theory; and Boolean algebra, which is widely used for the study of the logical structure of computers. The study of types of algebraic structures as mathematical objects is the object of universal algebra and category theory. The latter applies to every mathematical structure (not only algebraic ones). At its origin, it was introduced, together with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.
Calculus and analysis
Calculus, formerly called infinitesimal calculus, was introduced in the 17th century by Newton and Leibniz, independently and simultaneously.
It is fundamentally the study of the relationship of two changing quantities, called variables, such that one depends on the other. Calculus was largely expanded in the 18th century by Euler, with the introduction of the concept of a function and many other results. Presently, "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts. Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. Presently there are many subareas of analysis, some being shared with other areas of mathematics; they include:
Multivariable calculus
Functional analysis, where variables represent varying functions
Integration, measure theory and potential theory, all strongly related to probability theory
Ordinary differential equations
Partial differential equations
Numerical analysis, mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications of mathematics
Discrete mathematics
Mathematical logic and set theory
These subjects have belonged to mathematics since the end of the 19th century. Before this period, sets were not considered mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. Before the study of infinite sets by Georg Cantor, mathematicians were reluctant to consider collections that are actually infinite, and considered infinity as the result of an endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets, but also by showing that this implies different sizes of infinity (see Cantor's diagonal argument) and the existence of mathematical objects that cannot be computed, or even explicitly described (for example, Hamel bases of the real numbers over the rational numbers).
This led to the controversy over Cantor's set theory. In the same period, it appeared in various areas of mathematics that the former intuitive definitions of the basic mathematical objects were insufficient for ensuring mathematical rigour. Examples of such intuitive definitions are "a set is a collection of objects", "a natural number is what is used for counting", "a point is a shape with a zero length in every direction", and "a curve is a trace left by a moving point". This is the origin of the foundational crisis of mathematics. It was eventually solved in the mainstream of mathematics by systematizing the axiomatic method inside a formalized set theory. Roughly speaking, each mathematical object is defined by the set of all similar objects and the properties that these objects must have. For example, in Peano arithmetic, the natural numbers are defined by "zero is a number", "each number has a unique successor", "each number but zero has a unique predecessor", and some rules of reasoning. The "nature" of the objects defined this way is a philosophical problem that mathematicians leave to philosophers, even if many mathematicians have opinions on this nature and use their opinion, sometimes called "intuition", to guide their study and their search for proofs. This approach allows considering "logics" (that is, sets of allowed deduction rules), theorems, proofs, and so on as mathematical objects, and proving theorems about them. For example, Gödel's incompleteness theorems assert, roughly speaking, that in every consistent theory that contains the natural numbers, there are theorems that are true (that is, provable in a larger theory) but not provable inside the theory. This approach to the foundations of mathematics was challenged during the first half of the 20th century by mathematicians led by L. E. J. Brouwer, who promoted intuitionistic logic, which excludes the law of excluded middle.
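The Peano-style characterization quoted above ("zero is a number", "each number has a unique successor") translates directly into an inductive data type, with arithmetic defined by recursion on the successor structure. A minimal sketch:

```python
from dataclasses import dataclass

class Nat:
    """A Peano natural: either Zero or the Successor of another Nat."""

@dataclass(frozen=True)
class Zero(Nat):
    pass

@dataclass(frozen=True)
class Succ(Nat):
    pred: Nat

def add(a: Nat, b: Nat) -> Nat:
    # Addition defined by recursion, mirroring the axioms:
    #   a + 0    = a
    #   a + S(b) = S(a + b)
    return a if isinstance(b, Zero) else Succ(add(a, b.pred))

def to_int(n: Nat) -> int:
    # Interpret a Peano numeral as an ordinary Python int.
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

two = Succ(Succ(Zero()))
three = Succ(two)
print(to_int(add(two, three)))  # 5
```

Here the "nature" of a number is irrelevant: only the constructors and the rules relating them matter, which is precisely the axiomatic point of view described in the text.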
These problems and debates led to a wide expansion of mathematical logic, with subareas such as model theory (modeling some logical theories inside other theories), proof theory, type theory, computability theory and computational complexity theory. Although these aspects of mathematical logic were introduced before the rise of computers, their use in compiler design, program certification, proof assistants and other aspects of computer science contributed in turn to the expansion of these logical theories.
Applied mathematics
Applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry. Thus, "applied mathematics" is a mathematical science with specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems; as a profession focused on practical problems, applied mathematics focuses on the "formulation, study, and use of mathematical models" in science, engineering, and other areas of mathematical practice. In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where mathematics is developed primarily for its own sake. Thus, the activity of applied mathematics is vitally connected with research in pure mathematics.
Statistics and other decision sciences
Applied mathematics has significant overlap with the discipline of statistics, whose theory is formulated mathematically, especially with probability theory. Statisticians (working as part of a research project) "create data that makes sense" with random sampling and with randomized experiments; the design of a statistical sample or experiment specifies the analysis of the data (before the data becomes available).
When reconsidering data from experiments and samples or when analyzing data from observational studies, statisticians "make sense of the data" using the art of modelling and the theory of inference—with model selection and estimation; the estimated models and consequential predictions should be tested on new data. Statistical theory studies decision problems such as minimizing the risk (expected loss) of a statistical action, such as using a procedure in, for example, parameter estimation, hypothesis testing, and selecting the best. In these traditional areas of mathematical statistics, a statistical-decision problem is formulated by minimizing an objective function, like expected loss or cost, under specific constraints: for example, designing a survey often involves minimizing the cost of estimating a population mean with a given level of confidence. Because of its use of optimization, the mathematical theory of statistics shares concerns with other decision sciences, such as operations research, control theory, and mathematical economics. Computational mathematics Computational mathematics proposes and studies methods for solving mathematical problems that are typically too large for human numerical capacity. Numerical analysis studies methods for problems in analysis using functional analysis and approximation theory; numerical analysis broadly includes the study of approximation and discretisation with special focus on rounding errors. Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic matrix and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation. History The history of mathematics can be seen as an ever-increasing series of abstractions.
Evolutionarily speaking, the first abstraction to ever take place, which is shared by many animals, was probably that of numbers: the realization that a collection of two apples and a collection of two oranges (for example) have something in common, namely the quantity of their members. As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time—days, seasons, or years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic, algebra, and geometry for taxation and other financial calculations, for building and construction, and for astronomy. The oldest mathematical texts from Mesopotamia and Egypt are from 2000 to 1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical concept after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic (addition, subtraction, multiplication, and division) first appears in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time. Beginning in the 6th century BC with the Pythagoreans, the Ancient Greeks began a systematic study of mathematics as a subject in its own right, known as Greek mathematics. Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem, and proof. His book, Elements, is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes (c. 287–212 BC) of Syracuse.
He developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. Other notable achievements of Greek mathematics are conic sections (Apollonius of Perga, 3rd century BC), trigonometry (Hipparchus of Nicaea, 2nd century BC), and the beginnings of algebra (Diophantus, 3rd century AD). The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition and approximation of sine and cosine, and an early form of infinite series. During the Golden Age of Islam, especially during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics. The most notable achievement of Islamic mathematics was the development of algebra. Other achievements of the Islamic period include advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. During the early modern period, mathematics began to develop at an accelerating pace in Western Europe. The development of calculus by Isaac Newton and Gottfried Leibniz in the 17th century revolutionized mathematics. Leonhard Euler was the most notable mathematician of the 18th century, contributing numerous theorems and discoveries. Perhaps the foremost mathematician of the 19th century was the German mathematician Carl Friedrich Gauss, who made numerous contributions to fields such as algebra, analysis, differential geometry, matrix theory, number theory, and statistics.
In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show in part that any consistent axiomatic system—if powerful enough to describe arithmetic—will contain true propositions that cannot be proved. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made to this very day. According to Mikhail B. Sevryuk, in the January 2006 issue of the Bulletin of the American Mathematical Society, "The number of papers and books included in the Mathematical Reviews database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year. The overwhelming majority of works in this ocean contain new mathematical theorems and their proofs." Etymology The word mathematics comes from Ancient Greek máthēma (μάθημα), meaning "that which is learnt," "what one gets to know," hence also "study" and "science". The word for "mathematics" came to have the narrower and more technical meaning "mathematical study" even in Classical times. Its adjective is mathēmatikós (μαθηματικός), meaning "related to learning" or "studious," which likewise further came to mean "mathematical." In particular, mathēmatikḗ tékhnē (μαθηματικὴ τέχνη) meant "the mathematical art." Similarly, one of the two main schools of thought in Pythagoreanism was known as the mathēmatikoi (μαθηματικοί)—which at the time meant "learners" rather than "mathematicians" in the modern sense. In Latin, and in English until around 1700, the term mathematics more commonly meant "astrology" (or sometimes "astronomy") rather than "mathematics"; the meaning gradually changed to its present one from about 1500 to 1800. This has resulted in several mistranslations.
For example, Saint Augustine's warning that Christians should beware of mathematici, meaning astrologers, is sometimes mistranslated as a condemnation of mathematicians. The apparent plural form in English, like the French plural form les mathématiques (and the less commonly used singular derivative la mathématique), goes back to the Latin neuter plural mathematica (Cicero), based on the Greek plural ta mathēmatiká (τὰ μαθηματικά), used by Aristotle (384–322 BC), and meaning roughly "all things mathematical", although it is plausible that English borrowed only the adjective mathematic(al) and formed the noun mathematics anew, after the pattern of physics and metaphysics, which were inherited from Greek. In English, the noun mathematics takes a singular verb. It is often shortened to maths or, in North America, math. Philosophy of mathematics There is no general consensus about the exact definition or epistemological status of mathematics. At the end of the 19th century, the foundational crisis of mathematics and the resulting systematization of the axiomatic method gave rise to a dramatic increase in the number of mathematics areas and their fields of applications; a witness of this is the Mathematics Subject Classification, which lists more than sixty first-level areas of mathematics. Areas of mathematics Before the Renaissance, mathematics was divided into two main areas: arithmetic, devoted to the manipulation of numbers, and geometry, devoted to the study of shapes. There was also some pseudoscience, such as numerology and astrology, that was not clearly distinguished from mathematics. Around the Renaissance, two new main areas appeared. The introduction of mathematical notation led to algebra, which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, shorthand for infinitesimal calculus and integral calculus, is the study of continuous functions, which model the change of, and the relationships between, varying quantities (variables). This division into four main areas remained valid until the end of the 19th century, although some areas, such as celestial mechanics and solid mechanics, which were often considered as mathematics, are now considered as belonging to physics.
Also, some subjects developed during this period predate the division of mathematics into distinct areas, such as probability theory and combinatorics, which only later became regarded as autonomous areas of their own. At the end of the 19th century, the foundational crisis in mathematics and the resulting systematization of the axiomatic method led to an explosion in the number of areas of mathematics. The Mathematics Subject Classification contains more than 60 first-level areas. Some of these areas correspond to the older division into four main areas. This is the case of number theory (the modern name for higher arithmetic) and geometry. However, there are several other first-level areas that have "geometry" in their name or are commonly considered as belonging to geometry. Algebra and calculus do not appear as first-level areas, but are each split into several first-level areas. Other first-level areas did not exist at all before the 20th century (for example, category theory, homological algebra, and computer science) or were not considered before as mathematics, such as 03: Mathematical logic and foundations (including model theory, computability theory, set theory, proof theory, and algebraic logic). Number theory Number theory started with the manipulation of numbers, that is, natural numbers, and later expanded to integers and rational numbers. Number theory was formerly called arithmetic, but nowadays this term is mostly used for the methods of calculation with numbers. A specificity of number theory is that many problems that can be stated very elementarily are very difficult, and, when solved, may have a solution that requires very sophisticated methods coming from various parts of mathematics. A notable example is Fermat's Last Theorem, which was stated in 1637 by Pierre de Fermat and proved only in 1994 by Andrew Wiles, using, among other tools, algebraic geometry (more specifically scheme theory), category theory and homological algebra.
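The elementary statability of such problems is easy to see in code. A brute-force sketch (illustrative only; a finite search proves nothing, which is exactly why Wiles's machinery is needed) checking that x³ + y³ = z³ has no solutions in small positive integers:

```python
# Brute-force check of Fermat's equation x**n + y**n == z**n for n = 3.
# The bound N is an arbitrary illustrative choice.
N = 60
cubes = {z ** 3 for z in range(1, 2 * N)}  # covers every possible sum below
solutions = [(x, y) for x in range(1, N) for y in range(x, N)
             if x ** 3 + y ** 3 in cubes]
assert solutions == []  # no small counterexamples, as the theorem guarantees
```

The statement fits in one line; the proof took three and a half centuries.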
Another example is Goldbach's conjecture, which asserts that every even integer greater than 2 is the sum of two prime numbers. Stated in 1742 by Christian Goldbach, it remains unproven despite considerable effort. In view of the diversity of the studied problems and the solving methods, number theory is presently split into several subareas, which include analytic number theory, algebraic number theory, geometry of numbers (method oriented), Diophantine equations and transcendence theory (problem oriented). Geometry Geometry is, with arithmetic, one of the oldest branches of mathematics. It started with empirical recipes concerning shapes, such as lines, angles and circles, which were developed mainly for the needs of surveying and architecture. A fundamental innovation was the elaboration of proofs by the ancient Greeks: it is not sufficient to verify by measurement that, say, two lengths are equal. Such a property must be proved by abstract reasoning from previously proven results (theorems) and from basic properties that are considered self-evident because they are too basic to be the subject of a proof (postulates). This principle, which is foundational for all mathematics, was elaborated for the sake of geometry, and was systematized by Euclid around 300 BC in his book Elements. The resulting Euclidean geometry is the study of shapes and their arrangements constructed from lines, planes and circles in the Euclidean plane (plane geometry) and in (three-dimensional) Euclidean space. Euclidean geometry was developed without a change of methods or scope until the 17th century, when René Descartes introduced what is now called Cartesian coordinates. This was a major change of paradigm: instead of defining real numbers as lengths of line segments (see number line), it allowed the representation of points by their coordinates, and the use of algebra and, later, calculus for solving geometrical problems.
This split geometry into two parts that differ only by their methods: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically. Analytic geometry allows the study of new shapes, in particular curves that are not related to circles and lines; these curves are defined either as graphs of functions (whose study led to differential geometry), or by implicit equations, often polynomial equations (which spawned algebraic geometry). Analytic geometry also makes it possible to consider spaces of dimension higher than three (it suffices to consider more than three coordinates), which are no longer a model of the physical space. Geometry expanded quickly during the 19th century. A major event was the discovery (in the second half of the 19th century) of non-Euclidean geometries, which are geometries where the parallel postulate is abandoned. This is, besides Russell's paradox, one of the starting points of the foundational crisis of mathematics, as it called into question the truth of the aforementioned postulate. This aspect of the crisis was solved by systematizing the axiomatic method, and accepting that the truth of the chosen axioms is not a mathematical problem. In turn, the axiomatic method allows for the study of various geometries obtained either by changing the axioms or by considering properties that are invariant under specific transformations of the space. This results in a number of subareas and generalizations of geometry that include: Projective geometry, introduced in the 16th century by Girard Desargues, which extends Euclidean geometry by adding points at infinity at which parallel lines intersect. This simplifies many aspects of classical geometry by removing the need for a different treatment of intersecting and parallel lines. Affine geometry, the study of properties relative to parallelism and independent of the concept of length.
Differential geometry, the study of curves, surfaces, and their generalizations, which are defined using differentiable functions. Manifold theory, the study of shapes that are not necessarily embedded in a larger space. Riemannian geometry, the study of distance properties in curved spaces. Algebraic geometry, the study of curves, surfaces, and their generalizations, which are defined using polynomials. Topology, the study of properties that are kept under continuous deformations. Algebraic topology, the use in topology of algebraic methods, mainly homological algebra. Discrete geometry, the study of finite configurations in geometry. Convex geometry, the study of convex sets, which takes its importance from its applications in optimization. Complex geometry, the geometry obtained by replacing real numbers with complex numbers. Algebra Algebra may be viewed as the art of manipulating equations and formulas. Diophantus (3rd century) and Al-Khwarizmi (9th century) were two main precursors of algebra. The first solved some relations between unknown natural numbers (that is, equations) by deducing new relations until obtaining the solution. The second introduced systematic methods for transforming equations (such as moving a term from one side of an equation to the other side). The term algebra is derived from the Arabic word that he used for naming one of these methods in the title of his main treatise. Algebra began to be a specific area only with François Viète (1540–1603), who introduced the use of letters (variables) for representing unknown or unspecified numbers. This allows describing concisely the operations that have to be done on the numbers represented by the variables. Until the 19th century, algebra consisted mainly of the study of linear equations, presently called linear algebra, and polynomial equations in a single unknown, which were called algebraic equations (a term that is still in use, although it may be ambiguous).
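The term-moving transformations described above are the familiar steps for solving a linear equation; a schematic sketch for the general form a·x + b = c·x + d (the function name and parameterization are illustrative):

```python
from fractions import Fraction

def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d by moving terms across the equation:
    (a - c) * x = d - b, hence x = (d - b) / (a - c)."""
    a, b, c, d = map(Fraction, (a, b, c, d))
    if a == c:
        raise ValueError("no unique solution")
    return (d - b) / (a - c)

# 3x + 4 = x + 10  ->  2x = 6  ->  x = 3
assert solve_linear(3, 4, 1, 10) == 3
```

Exact rational arithmetic (`Fraction`) is used so that the "balancing" steps introduce no rounding, mirroring the symbolic manipulation rather than approximating it.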
During the 19th century, variables began to represent things other than numbers (such as matrices, modular integers, and geometric transformations), on which operations, often generalizations of arithmetic operations, can act. For dealing with this, the concept of algebraic structure was introduced, which consists of a set whose elements are unspecified, of operations acting on the elements of the set, and of rules that these operations must follow. So, the scope of algebra evolved to become essentially the study of algebraic structures. This object of algebra was called modern algebra or abstract algebra, the latter term being still used, mainly in an educational context, in opposition to elementary algebra, which is concerned with the older way of manipulating formulas. Some types of algebraic structures have properties that are useful, and often fundamental, in many areas of mathematics. Their studies are nowadays autonomous parts of algebra, which include: group theory; field theory; vector spaces, whose study is essentially the same as linear algebra; ring theory; commutative algebra, which is the study of commutative rings, includes the study of polynomials, and is a foundational part of algebraic geometry; homological algebra; Lie algebra and Lie group theory; and Boolean algebra, which is widely used for the study of the logical structure of computers. The study of types of algebraic structures as mathematical objects is the object of universal algebra and category theory. The latter applies to every mathematical structure (not only algebraic ones). At its origin, it was introduced, together with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology. Calculus and analysis Calculus, formerly called infinitesimal calculus, was introduced in the 17th century by Newton and Leibniz, independently and simultaneously.
It is fundamentally the study of the relationship between two changing quantities, called variables, such that one depends on the other. Calculus was largely expanded in the 18th century by Euler, with the introduction of the concept of a function, and many other results. Presently, "calculus" refers mainly to the elementary part of this theory, and "analysis" is commonly used for advanced parts. Analysis is further subdivided into real analysis, where variables represent real numbers, and complex analysis, where variables represent complex numbers. Presently there are many subareas of analysis, some being shared with other areas of mathematics; they include: Multivariable calculus; Functional analysis, where variables represent varying functions; Integration, measure theory and potential theory, all strongly related with probability theory; Ordinary differential equations; Partial differential equations; Numerical analysis, mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications of mathematics. Discrete mathematics Mathematical logic and set theory These subjects have belonged to mathematics since the end of the 19th century. Before this period, sets were not considered as mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. Before the study of infinite sets by Georg Cantor, mathematicians were reluctant to consider collections that are actually infinite, and considered infinity as the result of an endless enumeration. Cantor's work offended many mathematicians not only by considering actually infinite sets, but also by showing that this implies different sizes of infinity (see Cantor's diagonal argument) and the existence of mathematical objects that cannot be computed, or even explicitly described (for example, Hamel bases of the real numbers over the rational numbers).
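The diagonal argument cited above has a constructive core that a finite sketch can show: given any list of 0/1 sequences, flipping the diagonal yields a sequence that differs from every entry in the list. The function name is illustrative:

```python
def diagonal_complement(rows):
    """Given n sequences (each of length >= n) of 0s and 1s, build a
    sequence that differs from the i-th row at position i."""
    return [1 - row[i] for i, row in enumerate(rows)]

rows = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]
d = diagonal_complement(rows)
# d disagrees with every listed row, so it cannot appear in the list.
assert all(d[i] != row[i] for i, row in enumerate(rows))
assert d not in rows
```

Applied to an infinite enumeration of infinite binary sequences, the same construction shows no enumeration can be complete, which is how Cantor obtained different sizes of infinity.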
large harbor tug serving from 1965 to 2004 SS Manhattan (1931), a luxury liner SS Manhattan (1962), a tanker constructed to pass the Northwest Passage USS Manhattan (1863), a Union Navy ship in service until 1902 Music Manhattan Records, a record label The Manhattans, an R&B group from the 1970s and 80s Manhattan (Art Farmer album), 1981, or the title song Manhattan (Jeffrey Lewis & Los Bolts album), 2015 Manhattan (Skaters album), 2014 "Manhattan" (song), a song written in 1925 by Rodgers and Hart "Manhattan", a 1973 song by C. Jérôme "Manhattan", a 2009 song by the Kings of Leon from Only by the Night Manhattan (soundtrack), to the 1979 film Film and television Manhattan (1979 film), a film by Woody Allen Manhattan (1924 film), a film starring Richard Dix Manhattan (TV series), a 2014–2016 series "Manhattan" (Once Upon a Time), a 2013 episode of Once Upon a Time Manhattan, a fictional town in Manhattan, AZ Schools Manhattan Christian College, Manhattan, Kansas Manhattan High School, Manhattan, Kansas Manhattan School, Manhattan, Nevada Manhattan School of Music, Manhattan, New York Manhattan College, Bronx, New York Buildings Manhattan Laundry, a complex of historic buildings in Washington, D.C. 
Manhattan Tower, an apartment building in the Park Tzameret neighborhood of Tel Aviv, Israel Other uses Manhattan (board game), a 1990s board game by Andreas Seyfarth Manhattan (cocktail), an alcoholic drink Manhattan, a 1979 novel by Neal Travis Manhattan Avenue (Brooklyn), in Williamsburg and Greenpoint, New York Manhattan Avenue (Manhattan), in Harlem and the Upper West Side, New York Manhattan Bridge, connecting Lower Manhattan and Brooklyn Manhattan Handicap, an American Thoroughbred horse race held annually at Belmont Park in Elmont, New York Manhattan Limited, a Pennsylvania Railroad that ran between Chicago and New York City until 1971 Manhattan station, a commuter railroad station on Metra's SouthWest Service in Manhattan, Illinois Manhattan Theatre, a former Broadway theatre Manhattan Project, an operation that built the first nuclear weapons People with the surname Avro Manhattan (1914–1990), a writer best known for his criticisms of the Roman Catholic Church See also Doctor Manhattan, a character in the comic Watchmen Manhattan Building (disambiguation) Manhattan distance or
Manhattan length, a distance relating to taxicab geometry |
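The last entry refers to taxicab geometry, in which the distance between two points is the sum of the absolute differences of their coordinates, as if travelling along a rectilinear street grid. A minimal sketch of that definition (the function name is illustrative, not from any source):

```python
def manhattan_distance(p, q):
    """Sum of absolute coordinate differences between points p and q."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Travelling from (1, 2) to (4, 6) on a grid takes |4-1| + |6-2| = 7 blocks.
print(manhattan_distance((1, 2), (4, 6)))  # 7
```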
Europe, losing control of sections of the Muslim lands. Umayyad descendants took over Al-Andalus (or Muslim Spain), the Aghlabids controlled North Africa, and the Tulunids became rulers of Egypt. By the middle of the 8th century, new trading patterns were emerging in the Mediterranean. Franks traded timber, furs, swords and slaves in return for silks and other fabrics, spices, and precious metals from the Arabs.

Trade and economy
The migrations and invasions of the 4th and 5th centuries disrupted trade networks around the Mediterranean. African goods stopped being imported into Europe, first disappearing from the interior and by the 7th century found only in a few cities such as Rome or Naples. By the end of the 7th century, under the impact of the Muslim conquests, African products were no longer found in Western Europe. The replacement of goods from long-range trade with local products was a trend throughout the old Roman lands in the Early Middle Ages. This was especially marked in the lands that did not lie on the Mediterranean, such as northern Gaul or Britain. Non-local goods appearing in the archaeological record are usually luxury goods or metalworks. In the 7th and 8th centuries, new commercial networks were emerging in northern Europe. Goods like furs, walrus ivory and amber were delivered from the Baltic region to western Europe, contributing to the development of new trade centres in East Anglia, northern Francia and Scandinavia. Conflicts over the control of trade routes and toll stations were common in southern Scandinavia. Those who failed had to turn to raiding or to settle in foreign lands. The various Germanic states in the west all had coinages that imitated existing Roman and Byzantine forms. Gold continued to be minted until the end of the 7th century, when it was replaced by silver in the Merovingian kingdom in 693–94. The basic Frankish silver coin was the denarius or denier, while the Anglo-Saxon version was called a penny.
From these areas, the denier or penny spread throughout Europe from 700 to 1000 AD. Copper or bronze coins were not struck, nor was gold except in Southern Europe. No silver coins denominated in multiple units were minted.

Church and monasticism
Christianity was a major unifying factor between Eastern and Western Europe before the Arab conquests, but the conquest of North Africa sundered maritime connections between those areas. Increasingly, the Byzantine Church differed in language, practices, and liturgy from the Western Church. The Eastern Church used Greek instead of the Western Latin. Theological and political differences emerged, and by the early and middle 8th century issues such as iconoclasm, clerical marriage, and state control of the Church had widened to the extent that the cultural and religious differences were greater than the similarities. The formal break, known as the East–West Schism, came in 1054, when the papacy and the patriarchy of Constantinople clashed over papal supremacy and excommunicated each other, which led to the division of Christianity into two Churches—the Western branch became the Roman Catholic Church and the Eastern branch the Eastern Orthodox Church. The ecclesiastical structure of the Roman Empire survived the movements and invasions in the west mostly intact, but the papacy was little regarded, and few of the Western bishops looked to the bishop of Rome for religious or political leadership. Many of the popes prior to 750 were more concerned with Byzantine affairs and Eastern theological controversies. The register, or archived copies of the letters, of Pope Gregory the Great (pope 590–604) survived, and of those more than 850 letters, the vast majority were concerned with affairs in Italy or Constantinople. The only part of Western Europe where the papacy had influence was Britain, where Gregory had sent the Gregorian mission in 597 to convert the Anglo-Saxons to Christianity.
Irish missionaries were most active in Western Europe between the 5th and the 7th centuries, going first to England and Scotland and then on to the continent. Under such monks as Columba (d. 597) and Columbanus (d. 615), they founded monasteries, taught in Latin and Greek, and authored secular and religious works. The Early Middle Ages witnessed the rise of Christian monasticism. The shape of European monasticism was determined by traditions and ideas that originated with the Desert Fathers of Egypt. Monastic ideals spread through hagiographical literature such as the Life of Anthony. Most European monasteries were of the type that focuses on community experience of the spiritual life, called cenobitism, which was pioneered by the Egyptian hermit Pachomius (d. c. 350). Bishop Basil of Caesarea (d. 379) wrote a monastic rule for a community of Cappadocian ascetics which served as a highly esteemed template for similar regulations in the Mediterranean. These mainly covered the spiritual aspects of monasticism. In contrast, the Italian monk Benedict of Nursia (d. 547) adopted a more practical approach, regulating both the administrative and spiritual responsibilities of a community of monks led by an abbot. The Benedictine Rule became widely used in western monasteries already before it was decreed the norm for Frankish monastic communities in 817. In the east, the monastic rules compiled by Theodore the Studite (d. 826) gained popularity after they were adopted in the Great Lavra, a newly established imperial monastery on Mount Athos in the 960s. The Great Lavra set a precedent for the founding of further Athonite monasteries, turning the mount into the most important centre of Orthodox monasticism. 
Monks and monasteries had a deep effect on the religious and political life of the Early Middle Ages, in various cases acting as land trusts for powerful families, centres of propaganda and royal support in newly conquered regions, and bases for missions and proselytisation. They were the main and sometimes only outposts of education and literacy in a region. Many of the surviving manuscripts of the Latin classics were copied in monasteries in the Early Middle Ages. Monks were also the authors of new works on history, theology, and other subjects, by writers such as Bede (d. 735), a native of northern England. The Byzantine missionary Constantine (d. 869) developed Old Church Slavonic as a new liturgical language, enriching Slavic vocabulary with Greek religious terms. He also created an alphabet, likely the Glagolitic script, for it. These innovations established the basis for a flourishing Slavic religious literature.

Carolingian Europe
The Frankish kingdom in northern Gaul split into kingdoms called Austrasia, Neustria, and Burgundy during the 6th and 7th centuries, all of them ruled by the Merovingian dynasty, who were descended from Clovis. The 7th century was a tumultuous period of wars between Austrasia and Neustria. Such warfare was exploited by Pippin (d. 640), the Mayor of the Palace for Austrasia, who became the power behind the Austrasian throne. Later members of his family inherited the office, acting as advisers and regents. One of his descendants, Charles Martel (d. 741), won the Battle of Poitiers in 732, halting the advance of Muslim armies across the Pyrenees. Great Britain was divided into small states dominated by the kingdoms of Northumbria, Mercia, Wessex, and East Anglia, which descended from the Anglo-Saxon invaders. Smaller kingdoms in present-day Wales and Scotland were still under the control of the native Britons and Picts.
Ireland was divided into even smaller political units, usually known as tribal kingdoms, under the control of kings. There were perhaps as many as 150 local kings in Ireland, of varying importance. The Carolingian dynasty, as the successors to Charles Martel are known, officially took control of the kingdoms of Austrasia and Neustria in a coup of 753 led by Pippin III (r. 752–768). A contemporary chronicle claims that Pippin sought, and gained, authority for this coup from Pope Stephen II (pope 752–757). Pippin's takeover was reinforced with propaganda that portrayed the Merovingians as inept or cruel rulers, exalted the accomplishments of Charles Martel, and circulated stories of the family's great piety. At the time of his death in 768, Pippin left his kingdom in the hands of his two sons, Charles (r. 768–814) and Carloman (r. 768–771). When Carloman died of natural causes, Charles blocked the succession of Carloman's young son and installed himself as the king of the united Austrasia and Neustria. Charles, more often known as Charles the Great or Charlemagne, embarked upon a programme of systematic expansion in 774 that unified a large portion of Europe, eventually controlling modern-day France, northern Italy, and Saxony. In the wars that lasted beyond 800, he rewarded allies with war booty and command over parcels of land. In 774, Charlemagne conquered the Lombards, which freed the papacy from the fear of Lombard conquest and marked the beginnings of the Papal States. The Avars were forced into submission between 791 and 803. Their empire's fall facilitated the development of small Slavic principalities, mainly ruled by ambitious warlords under Frankish suzerainty. The coronation of Charlemagne as emperor on Christmas Day 800 is regarded as a turning point in medieval history, marking a return of the Western Roman Empire, since the new emperor ruled over much of the area previously controlled by the Western emperors.
It also marks a change in Charlemagne's relationship with the Byzantine Empire, as the assumption of the imperial title by the Carolingians asserted their equivalence to the Byzantine state. In 812, as a result of careful and protracted negotiations, the Byzantines acknowledged Charlemagne's title of "emperor" but without recognising him as a second "emperor of the Romans", or accepting his successors' claim to use his new title. The Frankish lands were rural in character, with only a few small cities. Most of the people were peasants settled on small farms. Little trade existed and much of that was with the British Isles and Scandinavia, in contrast to the older Roman Empire with its trading networks centred on the Mediterranean. The empire was administered by an itinerant court that travelled with the emperor, as well as approximately 300 imperial officials called counts, who administered the counties the empire had been divided into. The central administration supervised the counts through imperial emissaries called missi dominici, who served as roving inspectors and troubleshooters. The clerics of the royal chapel were responsible for recording important royal grants and decisions.

Carolingian Renaissance
Charlemagne's court in Aachen was the centre of the cultural revival sometimes referred to as the "Carolingian Renaissance". Literacy increased, as did development in the arts, architecture and jurisprudence, as well as liturgical and scriptural studies. The English monk Alcuin (d. 804) was invited to Aachen and brought the education available in the monasteries of Northumbria. Charlemagne's chancery—or writing office—made use of a new script today known as Carolingian minuscule, allowing a common writing style that advanced communication across much of Europe. Charlemagne sponsored changes in church liturgy, imposing the Roman form of church service on his domains, as well as the Gregorian chant in liturgical music for the churches.
An important activity for scholars during this period was the copying, correcting, and dissemination of basic works on religious and secular topics, with the aim of encouraging learning. New works on religious topics and schoolbooks were also produced. Grammarians of the period modified the Latin language, changing it from the Classical Latin of the Roman Empire into a more flexible form to fit the needs of the Church and government. By the reign of Charlemagne, the language had so diverged from the classical Latin that it was later called Medieval Latin.

Breakup of the Carolingian Empire
Charlemagne planned to continue the Frankish tradition of dividing his kingdom between all his heirs, but was unable to do so as only one son, Louis the Pious (r. 814–840), was still alive by 813. Just before Charlemagne died in 814, he crowned Louis as his successor. Louis's reign of 26 years was marked by numerous divisions of the empire among his sons and, after 829, civil wars between various alliances of father and sons over the control of various parts of the empire. Eventually, Louis recognised his eldest son Lothair I (d. 855) as emperor and gave him the Kingdom of Italy. Louis divided the rest of the empire between Lothair and Charles the Bald (d. 877), his youngest son. Lothair took East Francia, comprising both banks of the Rhine and eastwards, leaving Charles West Francia with the empire to the west of the Rhineland and the Alps. Louis the German (d. 876), the middle child, who had been rebellious to the last, was allowed to keep Bavaria under the suzerainty of his elder brother. The division was disputed. Pippin II of Aquitaine (d. after 864), the emperor's grandson, rebelled in a contest for Aquitaine, while Louis the German tried to annex all of East Francia. Louis the Pious died in 840, with the empire still in chaos. A three-year civil war followed his death.
By the Treaty of Verdun (843), a kingdom between the Rhine and Rhone rivers was created for Lothair to go with his lands in Italy, and his imperial title was recognised. Louis the German was in control of Bavaria and the eastern lands in modern-day Germany. Charles the Bald received the western Frankish lands, comprising most of modern-day France. Charlemagne's grandsons and great-grandsons divided their kingdoms between their descendants, eventually causing all internal cohesion to be lost. In 987 the Carolingian dynasty was replaced in the western lands, with the crowning of Hugh Capet (r. 987–996) as king. In the eastern lands the dynasty had died out earlier, in 911, with the death of Louis the Child, and the selection of the unrelated Conrad I (r. 911–918) as king. The breakup of the Carolingian Empire was accompanied by invasions, migrations, and raids by external foes. The Atlantic and northern shores were harassed by the Vikings, who also raided the British Isles and settled there as well as in Iceland. In 911, the Viking chieftain Rollo (d. c. 931) received permission from the Frankish King Charles the Simple (r. 898–922) to settle in what became Normandy. This settlement eventually expanded and Normans spread to southern Italy, then Sicily and England. The eastern parts of the Frankish kingdoms, especially Germany and Italy, were under continual Magyar assault until the invaders' defeat at the Battle of Lechfeld in 955. The breakup of the Abbasid dynasty meant that the Islamic world fragmented into smaller political states, some of which began expanding. The Aghlabids conquered Sicily, the Umayyads of Al-Andalus annexed the Balearic Islands, and Arab pirates launched regular raids against Italy and southern France.

New kingdoms and Byzantine revival
Efforts by local kings to fight the invaders led to the formation of new political entities. In Anglo-Saxon England, King Alfred the Great (r.
871–899) came to an agreement with the Viking invaders in the late 9th century, resulting in Danish settlements in Northumbria, Mercia, and parts of East Anglia. By the middle of the 10th century, Alfred's successors had conquered Northumbria, and restored English control over most of the southern part of Great Britain. In northern Britain, Kenneth MacAlpin (d. c. 860) united the Picts and the Scots into the Kingdom of Alba. In the early 10th century, the Ottonian dynasty established itself in Germany, and was engaged in driving back the Magyars. Its efforts culminated in the coronation in 962 of Otto I (r. 936–973) as Holy Roman Emperor. In the mid-10th century Italy was drawn into the Ottonian sphere, but the absent German kings could not consolidate royal authority in the Italian realm. The western Frankish kingdom was more fragmented, and although kings remained nominally in charge, much of the political power devolved to the local lords. In the Iberian Peninsula, Asturias expanded slowly south in the 8th and 9th centuries, and continued as the Kingdom of León when the royal centre was moved from the northern Oviedo to León in the 910s. Missionary efforts to Scandinavia during the 9th and 10th centuries helped strengthen the growth of kingdoms such as Sweden, Denmark, and Norway, which gained power and territory. Some kings converted to Christianity, although not all by 1000. Scandinavians also expanded and colonised throughout Europe. Besides the settlements in Ireland, England, and Normandy, further settlement took place in what became Russia and Iceland. Swedish traders and raiders ranged down the rivers of the Russian steppe, and even attempted to seize Constantinople in 860 and 907. The Eastern European trade routes towards Central Asia and the Near East were controlled by the Khazars. Their multiethnic empire resisted the Muslim expansion, and the Khazar leaders converted to Judaism by the 830s.
The Khazars were nominally ruled by a sacred king, the khagan, but the commander-in-chief of his army, the beg, was the power behind the throne. Byzantium revived its fortunes under Emperor Basil I (r. 867–886) and his successors Leo VI (r. 886–912) and Constantine VII (r. 913–959), members of the Macedonian dynasty. Commerce revived and the emperors oversaw the extension of a uniform administration to all the provinces. The military was reorganised, which allowed the emperors John I (r. 969–976) and Basil II (r. 976–1025) to expand the frontiers of the empire on all fronts. The imperial court was the centre of a revival of classical learning, a process known as the Macedonian Renaissance. Writers such as John Geometres (fl. early 10th century) composed new hymns, poems, and other works. Missionary efforts by both Eastern and Western clergy resulted in the conversion of the Moravians, Bulgars, Bohemians, Poles, Magyars, and Slavic inhabitants of the Kievan Rus'. These conversions contributed to the founding of political states in the lands of those peoples—the states of Moravia, Bulgaria, Bohemia, Poland, Hungary, and the Kievan Rus'. Bulgaria, which was founded at the Danube Delta around 680, at its height incorporated vast regions along the Lower Danube, in the Balkans and the Carpathian Basin. By 1018, the last Bulgarian nobles had surrendered to the Byzantine Empire.

Art and architecture
Church building was dominated by basilicas in the Later Roman Empire. Originally serving as public meeting halls, basilicas were first used as principal venues of Christian worship under Constantine the Great. During the reigns of his successors, new basilicas were built in the major cities of the Roman world, and even in the post-Roman tribal kingdoms until the mid-6th century. As the spacious basilicas became of little use with the decline of urban centres, they gave way to smaller churches, mainly divided into little chambers.
By the beginning of the 8th century, the Carolingian Empire revived the basilica form of architecture. One feature of the basilica is the use of a transept, or the "arms" of a cross-shaped building that are perpendicular to the long nave. Other new features of religious architecture include the crossing tower and a monumental entrance to the church, usually at the west end of the building. Carolingian art was produced for a small group of figures around the court, and the monasteries and churches they supported. It was dominated by efforts to regain the dignity and classicism of imperial Roman and Byzantine art, but was also influenced by the Insular art of the British Isles. Insular art integrated the energy of Irish Celtic and Anglo-Saxon Germanic styles of ornament with Mediterranean forms such as the book, and established many characteristics of art for the rest of the medieval period. Surviving religious works from the Early Middle Ages are mostly illuminated manuscripts and carved ivories, originally made for metalwork that has since been melted down. Objects in precious metals were the most prestigious form of art, but almost all are lost except for a few crosses such as the Cross of Lothair, several reliquaries, and finds such as the Anglo-Saxon burial at Sutton Hoo and the hoards of Gourdon from Merovingian France, Guarrazar from Visigothic Spain and Nagyszentmiklós near Byzantine territory. There are survivals from the large brooches in fibula or penannular form that were a key piece of personal adornment for elites, including the Irish Tara Brooch. Highly decorated books were mostly Gospel Books and these have survived in larger numbers, including the Insular Book of Kells, the Book of Lindisfarne, and the imperial Codex Aureus of St. Emmeram, which is one of the few to retain its "treasure binding" of gold encrusted with jewels. 
Charlemagne's court seems to have been responsible for the acceptance of figurative monumental sculpture in Christian art, and by the end of the period near life-sized figures such as the Gero Cross were common in important churches.

Military and technological developments
During the later Roman Empire, the principal military developments were attempts to create an effective cavalry force as well as the continued development of highly specialised types of troops. The creation of heavily armoured cataphract-type soldiers as cavalry was an important feature of the 5th-century Roman military. The various invading tribes had differing emphases on types of soldiers—ranging from the primarily infantry Anglo-Saxon invaders of Britain to the Vandals and Visigoths who had a high proportion of cavalry in their armies. During the early invasion period, the stirrup had not been introduced into warfare, which limited the usefulness of cavalry as shock troops because it was not possible to put the full force of the horse and rider behind blows struck by the rider. The greatest change in military affairs during the invasion period was the adoption of the Hunnic composite bow in place of the earlier, and weaker, Scythian composite bow. Another development was the increasing use of longswords and the progressive replacement of scale armour by mail armour and lamellar armour. The importance of infantry and light cavalry began to decline during the early Carolingian period, with a growing dominance of elite heavy cavalry. The use of militia-type levies of the free population declined over the Carolingian period. Although much of the Carolingian armies were mounted, a large proportion during the early period appear to have been mounted infantry, rather than true cavalry. One exception was Anglo-Saxon England, where the armies were still composed of regional levies, known as the fyrd, which were led by the local elites.
In military technology, one of the main changes was the return of the crossbow, which had been known in Roman times and reappeared as a military weapon during the last part of the Early Middle Ages. Another change was the introduction of the stirrup, which increased the effectiveness of cavalry as shock troops. A technological advance that had implications beyond the military was the horseshoe, which allowed horses to be used in rocky terrain.

High Middle Ages
Society and economic life
The High Middle Ages was a period of tremendous expansion of population. The estimated population of Europe grew from 35 to 80 million between 1000 and 1347, although the exact causes remain unclear: improved agricultural techniques, the decline of slaveholding, a more clement climate and the lack of invasion have all been suggested. As much as 90 per cent of the European population remained rural peasants. Many were no longer settled in isolated farms but had gathered into small communities, usually known as manors or villages. These peasants were often subject to noble

of this period and into the Late Middle Ages. The Late Middle Ages was marked by difficulties and calamities including famine, plague, and war, which significantly diminished the population of Europe; between 1347 and 1350, the Black Death killed about a third of Europeans. Controversy, heresy, and the Western Schism within the Catholic Church paralleled the interstate conflict, civil strife, and peasant revolts that occurred in the kingdoms. Cultural and technological developments transformed European society, concluding the Late Middle Ages and beginning the early modern period.

Terminology and periodisation
The Middle Ages is one of the three major periods in the most enduring scheme for analysing European history: classical civilisation or Antiquity, the Middle Ages and the Modern Period. The "Middle Ages" first appears in Latin in 1469 as media tempestas or "middle season".
In early usage, there were many variants, including medium aevum, or "middle age", first recorded in 1604, and media saecula, or "middle centuries", first recorded in 1625. The adjective "medieval" (or sometimes "mediaeval" or "mediæval"), meaning pertaining to the Middle Ages, derives from medium aevum. Medieval writers divided history into periods such as the "Six Ages" or the "Four Empires", and considered their time to be the last before the end of the world. When referring to their own times, they spoke of them as being "modern". In the 1330s, the Italian humanist and poet Petrarch referred to pre-Christian times as antiqua (or "ancient") and to the Christian period as nova (or "new"). Petrarch regarded the post-Roman centuries as "dark" compared to the "light" of classical antiquity. Leonardo Bruni was the first historian to use tripartite periodisation in his History of the Florentine People (1442), with a middle period "between the fall of the Roman Empire and the revival of city life sometime in late eleventh and twelfth centuries". Tripartite periodisation became standard after the 17th-century German historian Christoph Cellarius divided history into three periods: ancient, medieval, and modern. The most commonly given starting point for the Middle Ages is around 500, with the date of 476 first used by Bruni. Later starting dates are sometimes used in the outer parts of Europe. For Europe as a whole, 1500 is often considered to be the end of the Middle Ages, but there is no universally agreed upon end date. Depending on the context, events such as the conquest of Constantinople by the Turks in 1453, Christopher Columbus's first voyage to the Americas in 1492, or the Protestant Reformation in 1517 are sometimes used. English historians often use the Battle of Bosworth Field in 1485 to mark the end of the period. 
For Spain, dates commonly used are the death of King Ferdinand II in 1516, the death of Queen Isabella I of Castile in 1504, or the conquest of Granada in 1492. Historians from Romance-speaking countries tend to divide the Middle Ages into two parts: an earlier "High" and later "Low" period. English-speaking historians, following their German counterparts, generally subdivide the Middle Ages into three intervals: "Early", "High", and "Late". In the 19th century, the entire Middle Ages were often referred to as the "Dark Ages", but with the adoption of these subdivisions, use of this term was restricted to the Early Middle Ages, at least among historians.

Later Roman Empire
The Roman Empire reached its greatest territorial extent during the 2nd century AD; the following two centuries witnessed the slow decline of Roman control over its outlying territories. Economic issues, including inflation, and external pressure on the frontiers combined to create the Crisis of the Third Century, with emperors coming to the throne only to be rapidly replaced by new usurpers. Military expenses increased steadily during the 3rd century, mainly in response to the war with the Sasanian Empire, which revived in the middle of the 3rd century. The army doubled in size, and cavalry and smaller units replaced the Roman legion as the main tactical unit. The need for revenue led to increased taxes and a decline in the numbers of the curial, or landowning, class, with fewer of them willing to shoulder the burdens of holding office in their native towns. More bureaucrats were needed in the central administration to deal with the needs of the army, which led to complaints from civilians that there were more tax-collectors in the empire than tax-payers. The Emperor Diocletian (r.
284–305) split the empire into separately administered eastern and western halves in 286; the empire was not considered divided by its inhabitants or rulers, as legal and administrative promulgations in one division were considered valid in the other. In 330, after a period of civil war, Constantine the Great (r. 306–337) refounded the city of Byzantium as the newly renamed eastern capital, Constantinople. Diocletian's reforms strengthened the governmental bureaucracy, reformed taxation, and strengthened the army, which bought the empire time but did not resolve the problems it was facing: excessive taxation, a declining birthrate, and pressures on its frontiers, among others. Civil war between rival emperors became common in the middle of the 4th century, diverting soldiers from the empire's frontier forces and allowing invaders to encroach. For much of the 4th century, Roman society stabilised in a new form that differed from the earlier classical period, with a widening gulf between the rich and poor, and a decline in the vitality of the smaller towns. Another change was the Christianisation, or conversion of the empire to Christianity. The process was stimulated by the 3rd-century crisis, and by the end of the next century Christianity emerged as the empire's dominant religion. In 376, the Goths, fleeing from the Huns, received permission from Emperor Valens (r. 364–378) to settle in the Roman province of Thracia in the Balkans. The settlement did not go smoothly, and when Roman officials mishandled the situation, the Goths began to raid and plunder. Valens, attempting to put down the disorder, was killed fighting the Goths at the Battle of Adrianople on 9 August 378. In addition to the threat from such tribal confederacies in the north, internal divisions within the empire, especially within the Christian Church, caused problems. In 400, the Visigoths invaded the Western Roman Empire and, although briefly forced back from Italy, in 410 sacked the city of Rome. 
In 406 the Alans, Vandals, and Suevi crossed into Gaul; over the next three years they spread across Gaul and in 409 crossed the Pyrenees Mountains into modern-day Spain. The Migration Period began, when various peoples, initially largely Germanic peoples, moved across Europe. The Franks, Alemanni, and the Burgundians all ended up in Gaul while the Angles, Saxons, and Jutes settled in Britain, and the Vandals went on to cross the strait of Gibraltar after which they conquered the province of Africa. In the 430s the Huns began invading the empire; their king Attila (r. 434–453) led invasions into the Balkans in 442 and 447, Gaul in 451, and Italy in 452. The Hunnic threat remained until Attila's death in 453, when the Hunnic confederation he led fell apart. These invasions by the tribes completely changed the political and demographic nature of what had been the Western Roman Empire. By the end of the 5th century the western section of the empire was divided into smaller political units, ruled by the tribes that had invaded in the early part of the century. The deposition of the last emperor of the west, Romulus Augustulus, in 476 has traditionally marked the end of the Western Roman Empire. The Eastern Roman Empire, often referred to as the Byzantine Empire after the fall of its western counterpart, had little ability to assert control over the lost western territories. The Byzantine emperors maintained a claim over the territory, but while none of the new kings in the west dared to elevate himself to the position of emperor of the west, Byzantine control of most of the Western Empire could not be sustained.

Early Middle Ages
New realms
The political structure of Western Europe changed with the end of the united Roman Empire. Although the movements of peoples during this period are usually described as "invasions", they were not just military expeditions but migrations of entire peoples into the empire.
When dealing with the migrations, the eastern and western elites applied different methods. The Eastern Romans combined the deployment of armed forces with gifts and grants of offices to the tribal leaders. The Western Roman aristocrats failed to support the army, and also refused to pay the tribute that could have prevented barbarian invasions. The emperors of the 5th century were often controlled by military strongmen such as Stilicho (d. 408), Aetius (d. 454), Aspar (d. 471), Ricimer (d. 472), or Gundobad (d. 516), who were partly or fully of non-Roman background. When the line of Western emperors ceased, many of the kings who replaced them were from the same background. Intermarriage between the new kings and the Roman elites was common. This led to a fusion of Roman culture with the customs of the invading tribes, including the popular assemblies that allowed free male tribal members more say in political matters than was common in the Roman state. Material artefacts left by the Romans and the invaders are often similar, and tribal items were often modelled on Roman objects. Much of the scholarly and written culture of the new kingdoms was also based on Roman intellectual traditions. An important difference was the gradual loss of tax revenue by the new polities. Many of the new political entities no longer supported their armies through taxes, instead relying on granting them land or rents. This meant there was less need for large tax revenues and so the taxation systems decayed. Christian ethics brought about significant changes in the position of slaves in the 7th and 8th centuries. They were no longer regarded as their lords' property, and their right to decent treatment was enshrined in law. Between the 5th and 8th centuries, new peoples and individuals filled the political void left by Roman centralised government. The Ostrogoths, a Gothic tribe, settled in Roman Italy in the late 5th century under Theoderic the Great (d.
526) and set up a kingdom marked by its co-operation between the Italians and the Ostrogoths, at least until the last years of Theoderic's reign. The Burgundians settled in Gaul, and after an earlier realm was destroyed by the Huns in 436, formed a new kingdom in the 440s. Between today's Geneva and Lyon, it grew to become the realm of Burgundy in the late 5th and early 6th centuries. Elsewhere in Gaul, the Franks and Celtic Britons set up small polities. Francia was centred in northern Gaul, and the first king of whom much is known is Childeric I (d. 481). His grave was discovered in 1653 and is remarkable for its grave goods, which included weapons and a large quantity of gold. Under Childeric's son Clovis I (r. 481–511), the founder of the Merovingian dynasty, the Frankish kingdom expanded and converted to Christianity. The Britons, related to the natives of Britannia – modern-day Great Britain – settled in what is now Brittany. Other monarchies included the Visigothic Kingdom in the Iberian Peninsula, the kingdom of the Suebi in northwestern Iberia, and the Vandal Kingdom in North Africa. In the 6th century, the Lombards settled in Northern Italy, replacing the Ostrogothic kingdom with a grouping of duchies that occasionally selected a king to rule over them all. By the late 6th century, this arrangement had been replaced by a permanent monarchy, the Kingdom of the Lombards. During the invasions, some regions received a larger influx of new peoples than others. In Gaul for instance, the invaders settled much more extensively in the north-east than in the south-west. Slavs settled in Central and Eastern Europe and the Balkan Peninsula. The settlement of peoples was accompanied by changes in languages. Latin, the literary language of the Western Roman Empire, was gradually replaced by vernacular languages which evolved from Latin, but were distinct from it, collectively known as Romance languages. These changes from Latin to the new languages took many centuries.
Greek remained the language of the Byzantine Empire, but the migrations of the Slavs expanded the area of Slavic languages in Eastern Europe.

Byzantine survival

As Western Europe witnessed the formation of new kingdoms, the Eastern Roman Empire remained intact and experienced an economic revival that lasted into the early 7th century. There were fewer invasions of the eastern section of the empire; most occurred in the Balkans. Peace with the Sasanian Empire, the traditional enemy of Rome, lasted throughout most of the 5th century. The Eastern Empire was marked by closer relations between the political state and Christian Church, with doctrinal matters assuming an importance in Eastern politics that they did not have in Western Europe. Legal developments included the codification of Roman law; the first effort—the Codex Theodosianus—was completed in 438. Under Emperor Justinian (r. 527–565), another compilation took place—the Corpus Juris Civilis. Justinian oversaw the construction of the Hagia Sophia in Constantinople and the reconquest of North Africa from the Vandals and Italy from the Ostrogoths, under Belisarius (d. 565). The conquest of Italy was not complete, as a deadly outbreak of plague in 542 led to the rest of Justinian's reign concentrating on defensive measures rather than further conquests. At the Emperor's death, the Byzantines had control of most of Italy, North Africa, and a small foothold in southern Spain. Justinian's reconquests and excessive building program have been criticised by historians for bringing his realm to the brink of bankruptcy, but many of the difficulties faced by Justinian's successors were likely due to other factors, including the plague. In the Eastern Empire the slow infiltration of the Balkans by the Slavs added a further difficulty for Justinian's successors. It began gradually, but by the late 540s Slavic tribes were in Thrace and Illyricum, and had defeated an imperial army near Adrianople in 551.
Most Slavic, Turkic and Germanic tribes inhabiting the lowlands along the Lower and Middle Danube were conquered by the nomadic Avars in the 560s. Coming from the Central Asian steppes, they initially fought in Byzantine pay, but by the end of the 6th century, they were the dominant power in Central Europe and routinely able to force the Eastern emperors to pay tribute. An additional problem to face the empire came as a result of the involvement of Emperor Maurice (r. 582–602) in Persian politics when he intervened in a succession dispute. This led to a period of peace, but when Maurice was overthrown, the Persians invaded and during the reign of Emperor Heraclius (r. 610–641) controlled large chunks of the empire, including Egypt, Syria, and Anatolia until Heraclius' successful counterattack. In 628 the empire secured a peace treaty and recovered all of its lost territories.

Western society

In Western Europe, some of the older Roman elite families died out while others became more involved with ecclesiastical than secular affairs. Values attached to Latin scholarship and education mostly disappeared, and while literacy remained important, it became a practical skill rather than a sign of elite status. In the 4th century, Jerome (d. 420) dreamed that God rebuked him for spending more time reading Cicero than the Bible. By the 6th century, Gregory of Tours (d. 594) had a similar dream, but instead of being chastised for reading Cicero, he was chastised for learning shorthand. By the late 6th century, the principal means of religious instruction in the Church had become music and art rather than the book. Most intellectual efforts went towards imitating classical scholarship, but some original works were created, along with now-lost oral compositions. The writings of Sidonius Apollinaris (d. 489), Cassiodorus (d. ), and Boethius (d. c. 525) were typical of the age.
Changes also took place among laymen, as aristocratic culture focused on great feasts held in halls rather than on literary pursuits. Clothing for the elites was richly embellished with jewels and gold. Lords and kings supported entourages of fighters who formed the backbone of the military forces. Family ties within the elites were important, as were the virtues of loyalty, courage, and honour. These ties led to the prevalence of the feud in aristocratic society, examples of which included those related by Gregory of Tours that took place in Merovingian Gaul. Most feuds seem to have ended quickly with the payment of some sort of compensation. Women took part in aristocratic society mainly in their roles as wives and mothers of men, with the role of mother of a ruler being especially prominent in Merovingian Gaul. In Anglo-Saxon society the lack of many child rulers meant a lesser role for women as queen mothers, but this was compensated for by the increased role played by abbesses of monasteries. Only in Italy does it appear that women were always considered under the protection and control of a male relative. Peasant society is much less documented than the nobility. Most of the surviving information available to historians comes from archaeology; few detailed written records documenting peasant life remain from before the 9th century. Most of the descriptions of the lower classes come from either law codes or writers from the upper classes. Landholding patterns in the West were not uniform; some areas had greatly fragmented landholding patterns, but in other areas large contiguous blocks of land were the norm. These differences allowed for a wide variety of peasant societies, some dominated by aristocratic landholders and others having a great deal of autonomy. Land settlement also varied greatly. Some peasants lived in large settlements that numbered as many as 700 inhabitants. 
Others lived in small groups of a few families and still others lived on isolated farms spread over the countryside. There were also areas where the pattern was a mix of two or more of those systems. Unlike in the late Roman period, there was no sharp break between the legal status of the free peasant and the aristocrat, and it was possible for a free peasant's family to rise into the aristocracy over several generations through military service to a powerful lord. Roman city life and culture changed greatly in the early Middle Ages. Although Italian cities remained inhabited, they contracted significantly in size. Rome, for instance, shrank from a population of hundreds of thousands to around 30,000 by the end of the 6th century. Roman temples were converted into Christian churches and city walls remained in use. In Northern Europe, cities also shrank, while civic monuments and other public buildings were raided for building materials. The establishment of new kingdoms often meant some growth for the towns chosen as capitals. Although there had been Jewish communities in many Roman cities, the Jews suffered periods of persecution after the conversion of the empire to Christianity. Officially they were tolerated, if subject to conversion efforts, and at times were even encouraged to settle in new areas.

Rise of Islam

Religious beliefs were in flux in the lands along the Eastern Roman and Persian frontiers during the late 6th and early 7th centuries. State-sponsored Christian missionaries proselytised among the pagan steppe peoples, and the Persians made attempts to enforce their Zoroastrianism on the Christian Armenians. Judaism was an active proselytising faith, and at least one Arab political leader converted to it. The emergence of Islam in Arabia during the lifetime of Muhammad (d. 632) brought about more radical changes.
After his death, Islamic forces conquered much of the Near East, starting with Syria in 634–35, continuing with Persia between 637 and 642, and reaching Egypt in 640–41. In the eastern Mediterranean, the Muslim expansion was halted at Constantinople. The Eastern Romans used Greek fire, a highly combustible liquid, to defend their capital in 674–78 and 717–18. In the west, the advance of Islamic troops continued. They conquered North Africa by the early 8th century, annihilated the Visigothic Kingdom in 711, and invaded southern France in 713–25. The Muslim conquerors bypassed the mountainous northwestern region of the Iberian Peninsula. Here a small kingdom, Asturias, emerged as the centre of local resistance. The defeat of Muslim forces at the Battle of Tours in 732 led to the reconquest of southern France by the Franks, but the main reason for the halt of Islamic growth in Europe was the overthrow of the Umayyad Caliphate and its replacement by the Abbasid Caliphate. The Abbasids moved their capital to Baghdad and were more concerned with the Middle East than Europe, losing control of sections of the Muslim lands. Umayyad descendants took over Al-Andalus (or Muslim Spain), the Aghlabids controlled North Africa, and the Tulunids became rulers of Egypt. By the middle of the 8th century, new trading patterns were emerging in the Mediterranean. Franks traded timber, furs, swords and slaves in return for silks and other fabrics, spices, and precious metals from the Arabs.

Trade and economy

The migrations and invasions of the 4th and 5th centuries disrupted trade networks around the Mediterranean. African goods stopped being imported into Europe, first disappearing from the interior and by the 7th century found only in a few cities such as Rome or Naples. By the end of the 7th century, under the impact of the Muslim conquests, African products were no longer found in Western Europe.
The replacement of goods from long-range trade with local products was a trend throughout the old Roman lands that happened in the Early Middle Ages. This was especially marked in the lands that did not lie on the Mediterranean, such as northern Gaul or Britain. Non-local goods appearing in the archaeological record are usually luxury goods or metalworks. In the 7th and 8th centuries, new commercial networks were emerging in northern Europe. Goods like furs, walrus ivory and amber were delivered from the Baltic region to western Europe, contributing to the development of new trade centres in East Anglia, northern Francia and Scandinavia. Conflicts over the control of trade routes and toll stations were common in southern Scandinavia. Those who failed turned to raiding or settled in foreign lands. The various Germanic states in the west all had coinages that imitated existing Roman and Byzantine forms. Gold continued to be minted until the end of the 7th century, when, in 693–94, it was replaced by silver in the Merovingian kingdom. The basic Frankish silver coin was the denarius or denier, while the Anglo-Saxon version was called a penny. From these areas, the denier or penny spread throughout Europe from 700 to 1000 AD. Copper or bronze coins were not struck, nor were gold except in Southern Europe. No silver coins denominated in multiple units were minted.

Church and monasticism

Christianity was a major unifying factor between Eastern and Western Europe before the Arab conquests, but the conquest of North Africa sundered maritime connections between those areas. Increasingly, the Byzantine Church differed in language, practices, and liturgy from the Western Church. The Eastern Church used Greek instead of the Western Latin.
Theological and political differences emerged, and by the early and middle 8th century issues such as iconoclasm, clerical marriage, and state control of the Church had widened to the extent that the cultural and religious differences were greater than the similarities. The formal break, known as the East–West Schism, came in 1054, when the papacy and the patriarchy of Constantinople clashed over papal supremacy and excommunicated each other, which led to the division of Christianity into two Churches—the Western branch became the Roman Catholic Church and the Eastern branch the Eastern Orthodox Church. The ecclesiastical structure of the Roman Empire survived the movements and invasions in the west mostly intact, but the papacy was little regarded, and few of the Western bishops looked to the bishop of Rome for religious or political leadership. Many of the popes prior to 750 were more concerned with Byzantine affairs and Eastern theological controversies. The register, or archived copies of the letters, of Pope Gregory the Great (pope 590–604) survived, and of those more than 850 letters, the vast majority were concerned with affairs in Italy or Constantinople. The only part of Western Europe where the papacy had influence was Britain, where Gregory had sent the Gregorian mission in 597 to convert the Anglo-Saxons to Christianity. Irish missionaries were most active in Western Europe between the 5th and the 7th centuries, going first to England and Scotland and then on to the continent. Under such monks as Columba (d. 597) and Columbanus (d. 615), they founded monasteries, taught in Latin and Greek, and authored secular and religious works. The Early Middle Ages witnessed the rise of Christian monasticism. The shape of European monasticism was determined by traditions and ideas that originated with the Desert Fathers of Egypt. Monastic ideals spread through hagiographical literature such as the Life of Anthony. 
Most European monasteries were of the type that focused on community experience of the spiritual life, called cenobitism, which was pioneered by the Egyptian hermit Pachomius (d. c. 350). Bishop Basil of Caesarea (d. 379) wrote a monastic rule for a community of Cappadocian ascetics which served as a highly esteemed template for similar regulations in the Mediterranean. These mainly covered the spiritual aspects of monasticism. In contrast, the Italian monk Benedict of Nursia (d. 547) adopted a more practical approach, regulating both the administrative and spiritual responsibilities of a community of monks led by an abbot. The Benedictine Rule was already in wide use in western monasteries before it was decreed the norm for Frankish monastic communities in 817. In the east, the monastic rules compiled by Theodore the Studite (d. 826) gained popularity after they were adopted in the Great Lavra, a newly established imperial monastery on Mount Athos in the 960s. The Great Lavra set a precedent for the founding of further Athonite monasteries, turning the mount into the most important centre of Orthodox monasticism. Monks and monasteries had a deep effect on the religious and political life of the Early Middle Ages, in various cases acting as land trusts for powerful families, centres of propaganda and royal support in newly conquered regions, and bases for missions and proselytisation. They were the main and sometimes only outposts of education and literacy in a region. Many of the surviving manuscripts of the Latin classics were copied in monasteries in the Early Middle Ages. Monks were also the authors of new works, including history, theology, and other subjects, written by authors such as Bede (d. 735), a native of northern England. The Byzantine missionary Constantine (d. 869) developed Old Church Slavonic as a new liturgical language, enriching Slavic vocabulary with Greek religious terms. He also created an alphabet, likely the Glagolitic script, for it.
These innovations established the basis for a flourishing Slavic religious literature.

Carolingian Europe

The Frankish kingdom in northern Gaul split into kingdoms called Austrasia, Neustria, and Burgundy during the 6th and 7th centuries, all of them ruled by the Merovingian dynasty, who were descended from Clovis. The 7th century was a tumultuous period of wars between Austrasia and Neustria. Such warfare was exploited by Pippin (d. 640), the Mayor of the Palace for Austrasia who became the power behind the Austrasian throne. Later members of his family inherited the office, acting as advisers and regents. One of his descendants, Charles Martel (d. 741), won the Battle of Poitiers in 732, halting the advance of Muslim armies across the Pyrenees. Great Britain was divided into small states dominated by the kingdoms of Northumbria, Mercia, Wessex, and East Anglia which descended from the Anglo-Saxon invaders. Smaller kingdoms in present-day Wales and Scotland were still under the control of the native Britons and Picts. Ireland was divided into even smaller political units, usually known as tribal kingdoms, under the control of kings. There were perhaps as many as 150 local kings in Ireland, of varying importance. The Carolingian dynasty, as the successors to Charles Martel are known, officially took control of the kingdoms of Austrasia and Neustria in a coup of 753 led by Pippin III (r. 752–768). A contemporary chronicle claims that Pippin sought, and gained, authority for this coup from Pope Stephen II (pope 752–757). Pippin's takeover was reinforced with propaganda that portrayed the Merovingians as inept or cruel rulers, exalted the accomplishments of Charles Martel, and circulated stories of the family's great piety. At the time of his death in 768, Pippin left his kingdom in the hands of his two sons, Charles (r. 768–814) and Carloman (r. 768–771).
When Carloman died of natural causes, Charles blocked the succession of Carloman's young son and installed himself as the king of the united Austrasia and Neustria. Charles, more often known as Charles the Great or Charlemagne, embarked upon a programme of systematic expansion in 774 that unified a large portion of Europe, eventually controlling modern-day France, northern Italy, and Saxony. In the wars that lasted beyond 800, he rewarded allies with war booty and command over parcels of land. In 774, Charlemagne conquered the Lombards, which freed the papacy from the fear of Lombard conquest and marked the beginnings of the Papal States. The Avars were forced into submission between 791 and 803. Their empire's fall facilitated the development of small Slavic principalities, mainly ruled by ambitious warlords under Frankish suzerainty. The coronation of Charlemagne as emperor on Christmas Day 800 is regarded as a turning point in medieval history, marking a return of the Western Roman Empire, since the new emperor ruled over much of the area previously controlled by the Western emperors. It also marks a change in Charlemagne's relationship with the Byzantine Empire, as the assumption of the imperial title by the Carolingians asserted their equivalence to the Byzantine state. In 812, as a result of careful and protracted negotiations, the Byzantines acknowledged Charlemagne's title of "emperor" but without recognizing him as a second "emperor of the Romans", or accepting his successors' claim to use his new title. The Frankish lands were rural in character, with only a few small cities. Most of the people were peasants settled on small farms. Little trade existed and much of that was with the British Isles and Scandinavia, in contrast to the older Roman Empire with its trading networks centred on the Mediterranean. 
The empire was administered by an itinerant court that travelled with the emperor, as well as approximately 300 imperial officials called counts, who administered the counties the empire had been divided into. The central administration supervised the counts through imperial emissaries called missi dominici, who served as roving inspectors and troubleshooters. The clerics of the royal chapel were responsible for recording important royal grants and decisions.

Carolingian Renaissance

Charlemagne's court in Aachen was the centre of the cultural revival sometimes referred to as the "Carolingian Renaissance". Literacy increased, as did development in the arts, architecture and jurisprudence, as well as liturgical and scriptural studies. The English monk Alcuin (d. 804) was invited to Aachen and brought the education available in the monasteries of Northumbria. Charlemagne's chancery—or writing office—made use of a new script today known as Carolingian minuscule, allowing a common writing style that advanced communication across much of Europe. Charlemagne sponsored changes in church liturgy, imposing the Roman form of church service on his domains, as well as the Gregorian chant in liturgical music for the churches. An important activity for scholars during this period was the copying, correcting, and dissemination of basic works on religious and secular topics, with the aim of encouraging learning. New works on religious topics and schoolbooks were also produced. Grammarians of the period modified the Latin language, changing it from the Classical Latin of the Roman Empire into a more flexible form to fit the needs of the Church and government. By the reign of Charlemagne, the language had so diverged from the classical Latin that it was later called Medieval Latin.

Breakup of the Carolingian Empire

Charlemagne planned to continue the Frankish tradition of dividing his kingdom between all his heirs, but was unable to do so as only one son, Louis the Pious (r.
814–840), was still alive by 813. Just before Charlemagne died in 814, he crowned Louis as his successor. Louis's reign of 26 years was marked by numerous divisions of the empire among his sons and, after 829, civil wars between various alliances of father and sons over the control of various parts of the empire. Eventually, Louis recognised his eldest son Lothair (d. 855) as emperor and gave him the Kingdom of Italy. Louis divided the rest of the empire between Lothair and Charles the Bald (d. 877), his youngest son. Lothair took East Francia, comprising both banks of the Rhine and eastwards, leaving Charles West Francia with the empire to the west of the Rhineland and the Alps. Louis the German (d. 876), the middle child, who had been rebellious to the last, was allowed to keep Bavaria under the suzerainty of his elder brother. The division was disputed. Pepin II of Aquitaine (d. after 864), the emperor's grandson, rebelled in a contest for Aquitaine, while Louis the German tried to annex all of East Francia. Louis the Pious died in 840, with the empire still in chaos. A three-year civil war followed his death. By the Treaty of Verdun (843), a kingdom between the Rhine and Rhone rivers was created for Lothair to go with his lands in Italy, and his imperial title was recognised. Louis the German was in control of Bavaria and the eastern lands in modern-day Germany. Charles the Bald received the western Frankish lands, comprising most of modern-day France. Charlemagne's grandsons and great-grandsons divided their kingdoms between their descendants, eventually causing all internal cohesion to be lost. In 987 the Carolingian dynasty was replaced in the western lands, with the crowning of Hugh Capet (r. 987–996) as king. In the eastern lands the dynasty had died out earlier, in 911, with the death of Louis the Child, and the selection of the unrelated Conrad I (r. 911–918) as king.
The breakup of the Carolingian Empire was accompanied by invasions, migrations, and raids by external foes. The Atlantic and northern shores were harassed by the Vikings, who also raided the British Isles and settled there as well as in Iceland. In 911, the Viking chieftain Rollo (d. c. 931) received permission from the Frankish King Charles the Simple (r. 898–922) to settle in what became Normandy. This settlement eventually expanded and Normans spread to southern Italy, then Sicily and England. The eastern parts of the Frankish kingdoms, especially Germany and Italy, were under continual Magyar assault until the invaders' defeat at the Battle of Lechfeld in 955. The breakup of the Abbasid dynasty meant that the Islamic world fragmented into smaller political states, some of which began expanding. The Aghlabids conquered Sicily, the Umayyads of Al-Andalus annexed the Balearic Islands, and Arab pirates launched regular raids against Italy and southern France.

New kingdoms and Byzantine revival

Efforts by local kings to fight the invaders led to the formation of new political entities. In Anglo-Saxon England, King Alfred the Great (r. 871–899) came to an agreement with the Viking invaders in the late 9th century, resulting in Danish settlements in Northumbria, Mercia, and parts of East Anglia. By the middle of the 10th century, Alfred's successors had conquered Northumbria, and restored English control over most of the southern part of Great Britain. In northern Britain, Kenneth MacAlpin (d. c. 860) united the Picts and the Scots into the Kingdom of Alba. In the early 10th century, the Ottonian dynasty established itself in Germany, and was engaged in driving back the Magyars. Its efforts culminated in the coronation in 962 of Otto I (r. 936–973) as Holy Roman Emperor. In the mid-10th century Italy was drawn into the Ottonian sphere but the absent German kings could not consolidate royal authority in the Italian realm.
The western Frankish kingdom was more fragmented, and although kings remained nominally in charge, much of the political power devolved to the local lords. In the Iberian Peninsula, Asturias expanded slowly south in the 8th and 9th centuries, and continued as the Kingdom of León when the royal centre was moved from the northern Oviedo to León in the 910s. Missionary efforts to Scandinavia during the 9th and 10th centuries helped strengthen the growth of kingdoms such as Sweden, Denmark, and Norway, which gained power and territory. Some kings converted to Christianity, although not all by 1000. Scandinavians also expanded and colonised throughout Europe. Besides the settlements in Ireland, England, and Normandy, further settlement took place in what became Russia and Iceland. Swedish traders and raiders ranged down the rivers of the Russian steppe, and even attempted to seize Constantinople in 860 and 907. The Eastern European trade routes towards Central Asia and the Near East were controlled by the Khazars. Their multiethnic empire resisted the Muslim expansion, and the Khazar leaders converted to Judaism by the 830s. The Khazars were nominally ruled by a sacred king, the khagan, but the commander-in-chief of his army, the beg, was the power behind the throne. Byzantium revived its fortunes under Emperor Basil I (r. 867–886) and his successors Leo VI (r. 886–912) and Constantine VII (r. 913–959), members of the Macedonian dynasty. Commerce revived and the emperors oversaw the extension of a uniform administration to all the provinces. The military was reorganised, which allowed the emperors John I (r. 969–976) and Basil II (r. 976–1025) to expand the frontiers of the empire on all fronts. The imperial court was the centre of a revival of classical learning, a process known as the Macedonian Renaissance. Writers such as John Geometres (fl. early 10th century) composed new hymns, poems, and other works. |
For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean: |median − mean| ≤ (3/5)^{1/2} σ ≈ 0.7746 σ. A similar relation holds between the median and the mode: |median − mode| ≤ 3^{1/2} σ ≈ 1.732 σ.

Jensen's inequality for medians

Jensen's inequality states that for any random variable X with a finite expectation E[X] and for any convex function f, f(E[X]) ≤ E[f(X)]. This inequality generalizes to the median as well. We say a function f is a C function if, for any t, the preimage f^{−1}((−∞, t]) is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold. If f is a C function, then f(median[X]) ≤ median[f(X)]. If the medians are not unique, the statement holds for the corresponding suprema.

Medians for samples

The sample median

Efficient computation of the sample median

Even though comparison-sorting n items requires O(n log n) operations, selection algorithms can compute the kth-smallest of n items with only O(n) operations. This includes the median, which is the ⌈n/2⌉th order statistic (or for an even number of samples, the arithmetic mean of the two middle order statistics). Selection algorithms still have the downside of requiring Ω(n) memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median of three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median of three rule applied with limited recursion: if A is the sample laid out as an array, and med3(A) = median(A[1], A[n/2], A[n]), then ninther(A) = med3(med3(A[1 … n/3]), med3(A[n/3 … 2n/3]), med3(A[2n/3 … n])). The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.
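The two approaches above can be sketched in a few lines of Python. This is a minimal illustration, not a tuned implementation: `quickselect` computes an exact order statistic in expected linear time, while `ninther` is Tukey's estimator; the function names and the particular indices sampled inside `ninther` are our own conventions.

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) of a in expected O(n) time."""
    a = list(a)
    while True:
        if len(a) == 1:
            return a[0]
        pivot = random.choice(a)
        lows = [x for x in a if x < pivot]
        pivots = [x for x in a if x == pivot]
        if k < len(lows):
            a = lows                      # target is in the lower partition
        elif k < len(lows) + len(pivots):
            return pivot                  # target equals the pivot
        else:
            k -= len(lows) + len(pivots)  # target is in the upper partition
            a = [x for x in a if x > pivot]

def sample_median(a):
    """Exact median via selection: one order statistic (odd n) or the mean of two (even n)."""
    n = len(a)
    if n % 2:
        return quickselect(a, n // 2)
    return (quickselect(a, n // 2 - 1) + quickselect(a, n // 2)) / 2

def med3(x, y, z):
    # Median of three values: the basis of the median-of-three rule.
    return sorted((x, y, z))[1]

def ninther(a):
    """Tukey's ninther: the median-of-three rule applied with one level of recursion.

    Assumes len(a) >= 9; inspects only nine elements of the sample.
    """
    n = len(a)
    t = n // 3
    return med3(med3(a[0], a[t // 2], a[t - 1]),
                med3(a[t], a[t + t // 2], a[2 * t - 1]),
                med3(a[2 * t], a[2 * t + t // 2], a[n - 1]))
```

Unlike `sample_median`, which must hold the whole sample in memory, `ninther` is only an approximation, which is why quicksort uses it merely as a pivot estimate.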
Sampling distribution The distributions of both the sample mean and the sample median were determined by Laplace. The distribution of the sample median from a population with a density function f(x) is asymptotically normal with mean m and variance 1/(4n f(m)^2), where m is the median of f and n is the sample size. A modern proof follows below. Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles. For normal samples, the density is f(m) = 1/(2πσ^2)^(1/2), thus for large samples the variance of the median equals πσ^2/(2n). (See also section #Efficiency below.) Derivation of the asymptotic distribution We take the sample size to be an odd number n = 2k + 1 and assume our variable is continuous; the formula for the case of discrete variables is given below. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities F(v), f(v) dv, and 1 − F(v) − f(v) dv. For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density of the sample median at the point v directly from the trinomial distribution: Pr[median = v] dv = ((2k + 1)!/(k! k!)) F(v)^k (1 − F(v))^k f(v) dv. Now we introduce the beta function. For integer arguments α and β, this can be expressed as B(α, β) = (α − 1)!(β − 1)!/(α + β − 1)!. Also, recall that f(v) dv = dF(v). Using these relationships and setting both α and β equal to k + 1 allows the last expression to be written as Pr[median = v] dv = (F(v)^k (1 − F(v))^k / B(k + 1, k + 1)) dF(v). Hence the density function of the median is a symmetric beta distribution pushed forward by F. Its mean, as we would expect, is 0.5 and its variance is 1/(4(n + 2)). By the chain rule, the corresponding variance of the sample median is 1/(4(n + 2) f(m)^2). The additional 2 is negligible in the limit. Empirical local density In practice, the functions f and F are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example.
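The asymptotic variance πσ²/(2n) for normal samples can be checked by simulation. A minimal sketch (the sample size, trial count, and seed are arbitrary choices for illustration):

```python
import math
import random
import statistics

random.seed(0)
n, trials = 101, 5000

# empirical sampling distribution of the median of n standard-normal draws
medians = [statistics.median(random.gauss(0.0, 1.0) for _ in range(n))
           for _ in range(trials)]

emp_var = statistics.pvariance(medians)
theory = math.pi / (2 * n)   # asymptotic variance pi*sigma^2/(2n), sigma = 1
print(emp_var, theory)       # the two should agree to within a few percent
```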
Consider the following table, representing a sample of 3,800 (discrete-valued) observations: Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression; one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities. Here, i is the number of points strictly less than the median and k the number strictly greater. Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics. The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error. Estimation of variance from sample data The value of (2f(m))^(−2)—the asymptotic variance of n^(1/2) times the sample median's deviation from the population median m—has been studied by several authors. The standard "delete one" jackknife method produces inconsistent results. An alternative—the "delete k" method—where k grows with the sample size has been shown to be asymptotically consistent. This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent, but converges very slowly (order of n^(−1/4)). Other methods have been proposed but their behavior may differ between large and small samples. Efficiency The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size N = 2n + 1 from the normal distribution, the efficiency for large N is (2/π)(1 + 2/N). The efficiency tends to 2/π ≈ 0.637 as N tends to infinity.
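The bootstrap estimate mentioned above can be sketched as follows (a minimal illustration; the function name, resample count B and seed are this sketch's own choices):

```python
import random
import statistics

def bootstrap_se_median(sample, B=2000, seed=1):
    # estimate the standard error of the sample median by resampling
    # the data with replacement B times and taking the spread of the
    # bootstrap medians
    rng = random.Random(seed)
    n = len(sample)
    meds = [statistics.median(rng.choices(sample, k=n)) for _ in range(B)]
    return statistics.stdev(meds)
```

For roughly uniform data the result can be compared against the asymptotic value 1/(2 n^(1/2) f(m)) from the previous section.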
In other words, the relative variance of the median will be π/2 ≈ 1.57, or 57% greater than the variance of the mean – the relative standard error of the median will be (π/2)^(1/2) ≈ 1.25, or 25% greater than the standard error of the mean (see also section #Sampling distribution above). Other estimators For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median. If data are represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution. Multivariate median Previously, this article discussed the univariate median, when the sample or population is one-dimensional. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one. Marginal median The marginal median is defined for vectors defined with respect to a fixed set of coordinates. A marginal median is defined to be the vector whose components are univariate medians. The marginal median is easy to compute, and its properties were studied by Puri and Sen. Geometric median The geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. In contrast to the marginal median, the geometric median is equivariant with respect to Euclidean similarity transformations such as translations and rotations. Median in all directions If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions".
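The Hodges–Lehmann estimator mentioned above can be sketched as the median of the pairwise (Walsh) averages. A minimal illustration; note that whether the i = j pairs (the points themselves) are included varies between formulations, and here they are:

```python
import statistics
from itertools import combinations

def hodges_lehmann(sample):
    # median of all pairwise Walsh averages; the sample points themselves
    # are included as the i = j averages (one common convention)
    walsh = [(x + y) / 2 for x, y in combinations(sample, 2)]
    walsh += list(sample)
    return statistics.median(walsh)
```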
This concept is relevant to voting theory on account of the median voter theorem. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions). Centerpoint An alternative generalization of the median in higher dimensions is the centerpoint. Other median-related concepts Interpolated median When dealing with a discrete variable, it is sometimes useful to regard the observed values as being midpoints of underlying continuous intervals. An example of this is a Likert scale, on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55% are of 3 or below (so 33% have the value 3), then the median is 3, since the median is the smallest value x for which the cumulative proportion of observations at or below x is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35.
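The worked example above can be written directly in code (a minimal sketch; the counts dictionary and the helper name are this sketch's own):

```python
def interpolated_median(counts, width=1.0):
    # counts: mapping from each (integer) value to its number of observations;
    # each value v is treated as the interval [v - width/2, v + width/2]
    n = sum(counts.values())
    below = 0
    for v in sorted(counts):
        c = counts[v]
        if below + c >= n / 2:
            # split the median interval pro rata to the observations in it
            return (v - width / 2) + width * (n / 2 - below) / c
        below += c
    raise ValueError("empty sample")

# the example from the text: 22% at 2 or below, 33% at value 3, 45% above
print(interpolated_median({2: 22, 3: 33, 4: 45}))  # ≈ 3.35
```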
More formally, if the distribution of values is known, the interpolated median can be calculated from m_int = m + w(1/2 − (F(m) − 1/2)/f(m)), where m is the median category, w the interval width, f(m) the proportion of observations in the median category, and F(m) the proportion of observations at or below it. Alternatively, if in an observed sample there are k scores above the median category, j scores in it and i scores below it, then the interpolated median is given by m_int = m + (w/2)(k − i)/j. Pseudo-median For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median. The Hodges–Lehmann estimator has been generalized to multivariate distributions. Variants of regression The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes. Median filter The median filter is an important tool of image processing that can effectively remove salt-and-pepper noise from grayscale images. Cluster analysis In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters, in which the criterion of minimising the sum of squared distances to the cluster mean, used in k-means clustering, is replaced by minimising the sum of distances to the cluster median. Median–median line This is a method of robust regression. The idea dates back to Wald in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent parameter x: a left half with values less than the median and a right half with values greater than the median. He suggested taking the means of the dependent and independent variables of each half. If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values. For example, this data set of 8 numbers 1, 2, 3, 4, 5, 6, 8, 9 has a median value of 4.5, that is (4 + 5)/2 = 4.5.
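The odd/even convention in the example above is a few lines in most languages; a minimal Python sketch:

```python
def median(data):
    # middle element for odd n; mean of the two middle elements for even n
    s = sorted(data)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

print(median([1, 2, 3, 4, 5, 6, 8, 9]))  # 4.5
```

Python's standard library implements the same convention as statistics.median.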
(In more technical terms, this interprets the median as the fully trimmed mid-range). In general, with this convention, the median can be defined as follows: for a data set x of n elements, ordered from smallest to greatest, median(x) = x_((n+1)/2) if n is odd, and median(x) = (x_(n/2) + x_(n/2+1))/2 if n is even. Formal definition Formally, a median of a population is any value such that at most half of the population is less than the proposed median and at most half is greater than the proposed median. As seen above, medians may not be unique. If each set contains less than half the population, then some of the population is exactly equal to the unique median. The median is well-defined for any ordered (one-dimensional) data, and is independent of any distance metric. The median can thus be applied to classes which are ranked but not numerical (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between classes if there is an even number of cases. A geometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid. There is no widely accepted standard notation for the median, but some authors represent the median of a variable x either as x͂ or as μ1/2, sometimes also as M. In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced. The median is a special case of other ways of summarizing the typical values associated with a statistical distribution: it is the 2nd quartile, 5th decile, and 50th percentile. Uses The median can be used as a measure of location when one attaches reduced importance to extreme values, typically because a distribution is skewed, extreme values are not known, or outliers are untrustworthy, i.e., may be measurement/transcription errors. For example, consider the multiset 1, 2, 2, 2, 3, 14.
The median is 2 in this case (as is the mode), and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see below. As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated. Because the median is simple to understand and easy to calculate, while also being a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation. For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient when — and only when — data is uncontaminated by data from heavy-tailed distributions or from mixtures of distributions. Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~57% greater than the variance of the mean.
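The multiset above illustrates the robustness claim directly (a minimal check using the standard library):

```python
import statistics

data = [1, 2, 2, 2, 3, 14]
print(statistics.mean(data))    # mean = 4: pulled upward by the outlier 14
print(statistics.median(data))  # median = 2.0: unaffected by the outlier
```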
Probability distributions For any real-valued probability distribution with cumulative distribution function F, a median is defined as any real number m that satisfies the inequalities P(X ≤ m) ≥ 1/2 and P(X ≥ m) ≥ 1/2. An equivalent phrasing uses a random variable X distributed according to F: F(m) ≥ 1/2 and P(X < m) ≤ 1/2. Note that this definition does not require X to have an absolutely continuous distribution (which has a probability density function ƒ), nor does it require a discrete one. In the former case, the inequalities can be upgraded to equality: a median satisfies P(X ≤ m) = P(X ≥ m) = 1/2. Any probability distribution on R has at least one median, but in pathological cases there may be more than one median: if F is constant 1/2 on an interval (so that ƒ = 0 there), then any value of that interval is a median. Medians of particular distributions The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution: The median of a symmetric unimodal distribution coincides with the mode. The median of a symmetric distribution which possesses a mean μ also takes the value μ. The median of a normal distribution with mean μ and variance σ² is μ. In fact, for a normal distribution, mean = median = mode. The median of a uniform distribution in the interval [a, b] is (a + b)/2, which is also the mean. The median of a Cauchy distribution with location parameter x0 and scale parameter γ is x0, the location parameter. The median of a power law distribution x^(−a), with exponent a > 1, is 2^(1/(a − 1)) x_min, where x_min is the minimum value for which the power law holds. The median of an exponential distribution with rate parameter λ is the natural logarithm of 2 divided by the rate parameter: λ^(−1) ln 2. The median of a Weibull distribution with shape parameter k and scale parameter λ is λ(ln 2)^(1/k).
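These closed forms can be checked against the corresponding CDFs, since a median m of a continuous distribution satisfies F(m) = 1/2 (a small sketch; the parameter values are arbitrary):

```python
import math

lam = 2.0                       # exponential rate parameter
m_exp = math.log(2) / lam       # claimed median: ln(2)/lambda
assert abs((1 - math.exp(-lam * m_exp)) - 0.5) < 1e-12   # F(m) = 1/2

k, scale = 1.5, 3.0             # Weibull shape and scale
m_wei = scale * math.log(2) ** (1 / k)   # claimed median: scale*(ln 2)^(1/k)
assert abs((1 - math.exp(-((m_wei / scale) ** k))) - 0.5) < 1e-12

x0, gamma = 1.0, 2.0            # Cauchy location and scale
# Cauchy CDF: 1/2 + arctan((x - x0)/gamma)/pi; at x = x0 this is exactly 1/2
assert abs((0.5 + math.atan((x0 - x0) / gamma) / math.pi) - 0.5) < 1e-12
```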
Populations Optimality property The mean absolute error of a real variable c with respect to the random variable X is E(|X − c|). Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X. In particular, m is a sample median if and only if m minimizes the arithmetic mean of the absolute deviations. More generally, a median is defined as a minimum of E(|X − c| − |X|), as discussed below in the section on multivariate medians (specifically, the spatial median). This optimization-based definition of the median is useful in statistical data analysis, for example, in k-medians clustering. Inequality relating means and medians If the distribution has finite variance, then the
forwards from the pelvis. These are not found in any modern placental, but they are found in marsupials, monotremes, other nontherian mammals and Ukhaatherium, an early Cretaceous animal in the eutherian order Asioryctitheria. This also applies to the multituberculates. They are apparently an ancestral feature, which subsequently disappeared in the placental lineage. These epipubic bones seem to function by stiffening the muscles during locomotion, but in doing so they restrict the abdominal space that placentals require to contain their fetus during gestation. A narrow pelvic outlet indicates that the young were very small at birth and therefore pregnancy was short, as in modern marsupials. This suggests that the placenta was a later development. One of the earliest known monotremes was Teinolophos, which lived about 120 million years ago in Australia. Monotremes have some features which may be inherited from the original amniotes, such as the same orifice to urinate, defecate and reproduce (cloaca)—as lizards and birds also do—and they lay eggs which are leathery and uncalcified. Earliest appearances of features Hadrocodium, whose fossils date from approximately 195 million years ago, in the early Jurassic, provides the first clear evidence of a jaw joint formed solely by the squamosal and dentary bones; there is no space in the jaw for the articular, a bone involved in the jaws of all early synapsids. The earliest clear evidence of hair or fur is in fossils of Castorocauda and Megaconus, from 164 million years ago in the mid-Jurassic.
In the 1950s, it was suggested that the foramina (passages) in the maxillae and premaxillae (bones in the front of the upper jaw) of cynodonts were channels which supplied blood vessels and nerves to vibrissae (whiskers) and so were evidence of hair or fur; it was soon pointed out, however, that foramina do not necessarily show that an animal had vibrissae, as the modern lizard Tupinambis has foramina that are almost identical to those found in the nonmammalian cynodont Thrinaxodon. Popular sources, nevertheless, continue to attribute whiskers to Thrinaxodon. Studies on Permian coprolites suggest that non-mammalian synapsids of the epoch already had fur, setting the evolution of hairs possibly as far back as the dicynodonts. When endothermy first appeared in the evolution of mammals is uncertain, though it is generally agreed to have first evolved in non-mammalian therapsids. Modern monotremes have lower body temperatures and more variable metabolic rates than marsupials and placentals, but there is evidence that some of their ancestors, perhaps including ancestors of the therians, may have had body temperatures like those of modern therians. Likewise, some modern therians like afrotheres and xenarthrans have secondarily developed lower body temperatures. The evolution of erect limbs in mammals is incomplete—living and fossil monotremes have sprawling limbs. The parasagittal (nonsprawling) limb posture appeared sometime in the late Jurassic or early Cretaceous; it is found in the eutherian Eomaia and the metatherian Sinodelphys, both dated to 125 million years ago. Epipubic bones, a feature that strongly influenced the reproduction of most mammal clades, are first found in Tritylodontidae, suggesting that they are a synapomorphy between them and Mammaliaformes. They are omnipresent in non-placental mammaliaformes, though Megazostrodon and Erythrotherium appear to have lacked them.
It has been suggested that the original function of lactation (milk production) was to keep eggs moist. Much of the argument is based on monotremes, the egg-laying mammals. In human females, mammary glands become fully developed during puberty, regardless of pregnancy. Rise of the mammals Therian mammals took over the medium- to large-sized ecological niches in the Cenozoic, after the Cretaceous–Paleogene extinction event approximately 66 million years ago emptied ecological space once filled by non-avian dinosaurs and other groups of reptiles, as well as various other mammal groups, and underwent an exponential increase in body size (megafauna). Then mammals diversified very quickly; both birds and mammals show an exponential rise in diversity. For example, the earliest known bat dates from about 50 million years ago, only 16 million years after the extinction of the non-avian dinosaurs. Molecular phylogenetic studies initially suggested that most placental orders diverged about 100 to 85 million years ago and that modern families appeared in the period from the late Eocene through the Miocene. However, no placental fossils have been found from before the end of the Cretaceous. The earliest undisputed fossils of placentals come from the early Paleocene, after the extinction of the non-avian dinosaurs. In particular, scientists identified an early Paleocene animal named Protungulatum donnae as one of the first placental mammals; however, it has since been reclassified as a non-placental eutherian. Recalibrations of genetic and morphological diversity rates have suggested a Late Cretaceous origin for placentals, and a Paleocene origin for most modern clades. The earliest known ancestor of primates is Archicebus achilles from around 55 million years ago. This tiny primate weighed 20–30 grams (0.7–1.1 ounces) and could fit within a human palm.
Anatomy Distinguishing features Living mammal species can be identified by the presence of sweat glands, including those that are specialized to produce milk to nourish their young. In classifying fossils, however, other features must be used, since soft tissue glands and many other features are not visible in fossils. Many traits shared by all living mammals appeared among the earliest members of the group: Jaw joint – The dentary (the lower jaw bone, which carries the teeth) and the squamosal (a small cranial bone) meet to form the joint. In most gnathostomes, including early therapsids, the joint consists of the articular (a small bone at the back of the lower jaw) and quadrate (a small bone at the back of the upper jaw). Middle ear – In crown-group mammals, sound is carried from the eardrum by a chain of three bones, the malleus, the incus and the stapes. Ancestrally, the malleus and the incus are derived from the articular and the quadrate bones that constituted the jaw joint of early therapsids. Tooth replacement – Teeth can be replaced once (diphyodonty) or (as in toothed whales and murid rodents) not at all (monophyodonty). Elephants, manatees, and kangaroos continually grow new teeth throughout their life (polyphyodonty). Prismatic enamel – The enamel coating on the surface of a tooth consists of prisms, solid, rod-like structures extending from the dentin to the tooth's surface. Occipital condyles – Two knobs at the base of the skull fit into the topmost neck vertebra; most other tetrapods, in contrast, have only one such knob. For the most part, these characteristics were not present in the Triassic ancestors of the mammals. Nearly all mammaliaforms possess an epipubic bone, the exception being modern placentals. Sexual dimorphism On average, male mammals are larger than females, with males being at least 10% larger than females in over 45% of investigated species. 
Most mammalian orders also exhibit male-biased sexual dimorphism, although some orders do not show any bias or are significantly female-biased (Lagomorpha). Sexual size dimorphism increases with body size across mammals (Rensch's rule), suggesting that there are parallel selection pressures on both male and female size. Male-biased dimorphism relates to sexual selection on males through male–male competition for females, as there is a positive correlation between the degree of sexual selection, as indicated by mating systems, and the degree of male-biased size dimorphism. The degree of sexual selection is also positively correlated with male and female size across mammals. Further, parallel selection pressure on female mass is identified in that age at weaning is significantly higher in more polygynous species, even when correcting for body mass. Also, the reproductive rate is lower for larger females, indicating that fecundity selection selects for smaller females in mammals. Although these patterns hold across mammals as a whole, there is considerable variation across orders. Biological systems The majority of mammals have seven cervical vertebrae (bones in the neck). The exceptions are the manatee and the two-toed sloth, which have six, and the three-toed sloth which has nine. All mammalian brains possess a neocortex, a brain region unique to mammals. Placental brains have a corpus callosum, unlike monotremes and marsupials. The lungs of mammals are spongy and honeycombed. Breathing is mainly achieved with the diaphragm, which divides the thorax from the abdominal cavity, forming a dome convex to the thorax. Contraction of the diaphragm flattens the dome, increasing the volume of the lung cavity. Air enters through the oral and nasal cavities, and travels through the larynx, trachea and bronchi, and expands the alveoli. Relaxing the diaphragm has the opposite effect, decreasing the volume of the lung cavity, causing air to be pushed out of the lungs. 
During exercise, the abdominal wall contracts, increasing pressure on the diaphragm, which forces air out quicker and more forcefully. The rib cage is able to expand and contract the chest cavity through the action of other respiratory muscles. Consequently, air is sucked into or expelled out of the lungs, always moving down its pressure gradient. This type of lung is known as a bellows lung due to its resemblance to blacksmith bellows. The mammalian heart has four chambers, two upper atria, the receiving chambers, and two lower ventricles, the discharging chambers. The heart has four valves, which separate its chambers and ensure blood flows in the correct direction through the heart (preventing backflow). After gas exchange in the pulmonary capillaries (blood vessels in the lungs), oxygen-rich blood returns to the left atrium via one of the four pulmonary veins. Blood flows nearly continuously back into the atrium, which acts as the receiving chamber, and from here through an opening into the left ventricle. Most blood flows passively into the heart while both the atria and ventricles are relaxed, but toward the end of the ventricular relaxation period, the left atrium will contract, pumping blood into the ventricle. The heart also requires nutrients and oxygen found in blood like other muscles, and is supplied via coronary arteries. The integumentary system (skin) is made up of three layers: the outermost epidermis, the dermis and the hypodermis. The epidermis is typically 10 to 30 cells thick; its main function is to provide a waterproof layer. Its outermost cells are constantly lost; its bottommost cells are constantly dividing and pushing upward. The middle layer, the dermis, is 15 to 40 times thicker than the epidermis. The dermis is made up of many components, such as bony structures and blood vessels. The hypodermis is made up of adipose tissue, which stores lipids and provides cushioning and insulation.
The thickness of this layer varies widely from species to species; marine mammals require a thick hypodermis (blubber) for insulation, and right whales have the thickest blubber. Although other animals have features such as whiskers, feathers, setae, or cilia that superficially resemble it, no animals other than mammals have hair. It is a definitive characteristic of the class, though some mammals have very little. Herbivores have developed a diverse range of physical structures to facilitate the consumption of plant material. To break up intact plant tissues, mammals have developed teeth structures that reflect their feeding preferences. For instance, frugivores (animals that feed primarily on fruit) and herbivores that feed on soft foliage have low-crowned teeth specialized for grinding foliage and seeds. Grazing animals that tend to eat hard, silica-rich grasses have high-crowned teeth, which are capable of grinding tough plant tissues and do not wear down as quickly as low-crowned teeth. Most carnivorous mammals have carnassialiforme teeth (of varying length depending on diet), long canines and similar tooth replacement patterns. The stomach of even-toed ungulates (Artiodactyla) is divided into four sections: the rumen, the reticulum, the omasum and the abomasum (only ruminants have a rumen). After the plant material is consumed, it is mixed with saliva in the rumen and reticulum and separates into solid and liquid material. The solids lump together to form a bolus (or cud), which is regurgitated. When the bolus enters the mouth, the fluid is squeezed out with the tongue and swallowed again. Ingested food passes to the rumen and reticulum, where cellulolytic microbes (bacteria, protozoa and fungi) produce cellulase, which is needed to break down the cellulose in plants. Perissodactyls, in contrast to the ruminants, store digested food that has left the stomach in an enlarged cecum, where it is fermented by bacteria.
Carnivora have a simple stomach adapted to digest primarily meat, as compared to the elaborate digestive systems of herbivorous animals, which are necessary to break down tough, complex plant fibers. The caecum is either absent or short and simple, and the large intestine is not sacculated or much wider than the small intestine. The mammalian excretory system involves many components. Like most other land animals, mammals are ureotelic, and convert ammonia into urea, which is done by the liver as part of the urea cycle. Bilirubin, a waste product derived from blood cells, is passed through bile and urine with the help of enzymes excreted by the liver. The passing of bilirubin via bile through the intestinal tract gives mammalian feces a distinctive brown coloration. Distinctive features of the mammalian kidney include the presence of the renal pelvis and renal pyramids, and of a clearly distinguishable cortex and medulla, which is due to the presence of elongated loops of Henle. Only the mammalian kidney has a bean shape, although there are some exceptions, such as the multilobed reniculate kidneys of pinnipeds, cetaceans and bears. Most adult placental mammals have no remaining trace of the cloaca. In the embryo, the embryonic cloaca divides into a posterior region that becomes part of the anus, and an anterior region that has different fates depending on the sex of the individual: in females, it develops into the vestibule that receives the urethra and vagina, while in males it forms the entirety of the penile urethra. However, the tenrecs, golden moles, and some shrews retain a cloaca as adults. In marsupials, the genital tract is separate from the anus, but a trace of the original cloaca does remain externally. Monotremes, whose name translates from Greek as "single hole", have a true cloaca.
Sound production As in all other tetrapods, mammals have a larynx that can quickly open and close to produce sounds, and a supralaryngeal vocal tract which filters this sound. The lungs and surrounding musculature provide the air stream and pressure required to phonate. The larynx controls the pitch and volume of sound, but the strength the lungs exert to exhale also contributes to volume. More primitive mammals, such as the echidna, can only hiss, as sound is achieved solely through exhaling through a partially closed larynx. Other mammals phonate using vocal folds. The movement or tenseness of the vocal folds can result in many sounds such as purring and screaming. Mammals can change the position of the larynx, allowing them to breathe through the nose while swallowing through the mouth, and to form both oral and nasal sounds; nasal sounds, such as a dog whine, are generally soft sounds, and oral sounds, such as a dog bark, are generally loud. Some mammals have a large larynx and thus a low-pitched voice, namely the hammer-headed bat (Hypsignathus monstrosus) where the larynx can take up the entirety of the thoracic cavity while pushing the lungs, heart, and trachea into the abdomen. Large vocal pads can also lower the pitch, as in the low-pitched roars of big cats. The production of infrasound is possible in some mammals such as the African elephant (Loxodonta spp.) and baleen whales. Small mammals with small larynxes have the ability to produce ultrasound, which can be detected by modifications to the middle ear and cochlea. Ultrasound is inaudible to birds and reptiles, which might have been important during the Mesozoic, when birds and reptiles were the dominant predators. This private channel is used by some rodents in, for example, mother-to-pup communication, and by bats when echolocating. Toothed whales also use echolocation, but, as opposed to the vocal membrane that extends upward from the vocal folds, they have a melon to manipulate sounds. 
Some mammals, namely the primates, have air sacs attached to the larynx, which may function to lower the resonances or increase the volume of sound. The vocal production system is controlled by the cranial nerve nuclei in the brain, and supplied by the recurrent laryngeal nerve and the superior laryngeal nerve, branches of the vagus nerve. The vocal tract is supplied by the hypoglossal and facial nerves. Electrical stimulation of the periaqueductal gray (PAG) region of the mammalian midbrain elicits vocalizations. The ability to learn new vocalizations has been demonstrated only in humans, seals, cetaceans, elephants and possibly bats; in humans, this is the result of a direct connection between the motor cortex, which controls movement, and the motor neurons in the spinal cord. Fur The primary function of the fur of mammals is thermoregulation. Other functions include protection, sensory purposes, waterproofing, and camouflage. Different types of fur serve different purposes:
Definitive – which may be shed after reaching a certain length
Vibrissae – sensory hairs, most commonly whiskers
Pelage – guard hairs, under-fur, and awn hair
Spines – stiff guard hairs used for defense (such as in porcupines)
Bristles – long hairs usually used in visual signals (such as a lion's mane)
Velli – often called "down fur", which insulates newborn mammals
Wool – long, soft and often curly
Thermoregulation Hair length is not a factor in thermoregulation: for example, some tropical mammals such as sloths have the same fur length as some arctic mammals but with less insulation; and, conversely, other tropical mammals with short hair have the same insulating value as arctic mammals. The denseness of fur can increase an animal's insulation value, and arctic mammals especially have dense fur; for example, the musk ox has long guard hairs as well as a dense underfur, which together form an airtight coat that allows it to survive in extreme cold.
Some desert mammals, such as camels, use dense fur to prevent solar heat from reaching their skin, allowing the animal to stay cool; the surface of a camel's fur can become far hotter in the summer than the skin beneath it. Aquatic mammals, conversely, trap air in their fur to conserve heat by keeping the skin dry. Coloration Mammalian coats are colored for a variety of reasons, the major selective pressures including camouflage, sexual selection, communication, and thermoregulation. Coloration in both the hair and skin of mammals is mainly determined by the type and amount of melanin; eumelanins for brown and black colors and pheomelanin for a range of yellowish to reddish colors, giving mammals an earth tone. Some mammals have more vibrant colors; the mandrill has bright blue ridges on its muzzle which are produced by diffraction in facial collagen fibers. Many sloths appear green because their fur hosts green algae; this may be a symbiotic relation that affords camouflage to the sloths. Camouflage is a powerful influence in a large number of mammals, as it helps to conceal individuals from predators or prey. In arctic and subarctic mammals such as the arctic fox (Alopex lagopus), collared lemming (Dicrostonyx groenlandicus), stoat (Mustela erminea), and snowshoe hare (Lepus americanus), seasonal color change between brown in summer and white in winter is driven largely by camouflage. Some arboreal mammals, notably primates and marsupials, have shades of violet, green, or blue skin on parts of their bodies, indicating some distinct advantage in their largely arboreal habitat due to convergent evolution. Aposematism, warning off possible predators, is the most likely explanation of the black-and-white pelage of many mammals which are able to defend themselves, such as the foul-smelling skunk and the powerful and aggressive honey badger. Coat color is sometimes sexually dimorphic, as in many primate species.
Differences in female and male coat color may indicate nutrition and hormone levels, important in mate selection. Coat color may influence the ability to retain heat, depending on how much light is reflected. Mammals with a darker colored coat can absorb more heat from solar radiation, and stay warmer, and some smaller mammals, such as voles, have darker fur in the winter. The white, pigmentless fur of arctic mammals, such as the polar bear, may reflect more solar radiation directly onto the skin. The dazzling black-and-white striping of zebras appears to provide some protection from biting flies. Reproductive system Mammals are solely gonochoric (each individual is born with either male or female genitalia, in contrast to hermaphrodites, which have both). In male placentals, the penis is used both for urination and copulation. Depending on the species, an erection may be fueled by blood flow into vascular, spongy tissue or by muscular action. A penis may be contained in a prepuce when not erect, and some placentals also have a penis bone (baculum). Marsupials typically have forked penises, while the echidna penis generally has four heads with only two functioning. The testes of most mammals descend into the scrotum, which is typically posterior to the penis but is often anterior in marsupials. Female mammals generally have a clitoris, labia majora and labia minora on the outside, while the internal system contains paired oviducts, 1–2 uteri, 1–2 cervices and a vagina. Marsupials have two lateral vaginas and a medial vagina. The "vagina" of monotremes is better understood as a "urogenital sinus".
The uterine systems of placental mammals vary between a duplex, in which there are two uteri and cervices that open into the vagina; a bipartite, in which two uterine horns share a single cervix that connects to the vagina; a bicornuate, in which two uterine horns are connected distally but separate medially, creating a Y-shape; and a simplex, which has a single uterus. The ancestral condition for mammal reproduction is the birthing of relatively undeveloped young, either through direct vivipary or a short period as soft-shelled eggs. This is likely because the presence of epipubic bones prevented the torso from expanding. The oldest demonstration of this reproductive style is Kayentatherium, which produced undeveloped perinates, but at much higher litter sizes than any modern mammal: 38 perinate specimens are known from a single fossil. Most modern mammals are viviparous, giving birth to live young. However, the five species of monotreme, the platypus and the four species of echidna, lay eggs. The monotremes have a sex determination system different from that of most other mammals. In particular, the sex chromosomes of a platypus are more like those of a chicken than those of a therian mammal. Viviparous mammals are in the subclass Theria; those living today are in the marsupial and placental infraclasses. Marsupials have a short gestation period, typically shorter than the estrous cycle, and generally give birth to a number of undeveloped newborns that then undergo further development; in many species, this takes place within a pouch-like sac, the marsupium, located in the front of the mother's abdomen. This is the plesiomorphic condition among viviparous mammals; the presence of epipubic bones in all non-placental mammals prevents the expansion of the torso needed for full pregnancy. Even non-placental eutherians probably reproduced this way. The placentals give birth to relatively complete and developed young, usually after long gestation periods.
They get their name from the placenta, which connects the developing fetus to the uterine wall to allow nutrient uptake. In placental mammals, the epipubic bones are either completely lost or converted into the baculum, allowing the torso to expand and thus birth developed offspring. The mammary glands of mammals are specialized to produce milk, the primary source of nutrition for newborns. The monotremes branched early from other mammals and do not have the nipples seen in most mammals, but they do have mammary glands. The young lick the milk from a mammary patch on the mother's belly. Compared to placental mammals, the milk of marsupials changes greatly in both production rate and in nutrient composition, due to the underdeveloped young. In addition, the mammary glands have more autonomy, allowing them to supply separate milks to young at different development stages. Lactose is the main sugar in placental mammal milk, while monotreme and marsupial milk is dominated by oligosaccharides. Weaning is the process in which a mammal becomes less dependent on its mother's milk and more on solid food. Endothermy Nearly all mammals are endothermic ("warm-blooded"). Most mammals also have hair to help keep them warm. Like birds, mammals can forage or hunt in weather and climates too cold for ectothermic ("cold-blooded") reptiles and insects. Endothermy requires plenty of food energy, so mammals eat more food per unit of body weight than most reptiles. Small insectivorous mammals eat prodigious amounts for their size. A rare exception, the naked mole-rat, produces little metabolic heat, so it is considered an operational poikilotherm. Birds are also endothermic, so endothermy is not unique to mammals. Species lifespan Among mammals, species maximum lifespan varies significantly (for example, the shrew has a lifespan of two years, whereas the oldest bowhead whale is recorded to be 211 years).
Although the underlying basis for these lifespan differences is still uncertain, numerous studies indicate that the ability to repair DNA damage is an important determinant of mammalian lifespan. In a 1974 study by Hart and Setlow, it was found that DNA excision repair capability increased systematically with species lifespan among seven mammalian species. Species lifespan was observed to be robustly correlated with the capacity to recognize DNA double-strand breaks as well as the level of the DNA repair protein Ku80. In a study of the cells from sixteen mammalian species, genes employed in DNA repair were found to be up-regulated in the longer-lived species. The cellular level of the DNA repair enzyme poly ADP ribose polymerase was found to correlate with species lifespan in a study of 13 mammalian species. Three additional studies of a variety of mammalian species also reported a correlation between species lifespan and DNA repair capability. Locomotion Terrestrial Most vertebrates—the amphibians, the reptiles and some mammals such as humans and bears—are plantigrade, walking on the whole of the underside of the foot. Many mammals, such as cats and dogs, are digitigrade, walking on their toes, the greater stride length allowing more speed. Digitigrade mammals are also often adept at quiet movement. Some animals such as horses are unguligrade, walking on the tips of their toes. This even further increases their stride length and thus their speed. A few mammals, namely the great apes, are also known to walk on their knuckles, at least for their front legs. Giant anteaters and platypuses are also knuckle-walkers. Some mammals are bipeds, using only two limbs for locomotion, which can be seen in, for example, humans and the great apes. Bipedal species have a larger field of vision than quadrupeds, conserve more energy and have the ability to manipulate objects with their hands, which aids in foraging. 
Instead of walking, some bipeds hop, such as kangaroos and kangaroo rats. Animals use different gaits for different speeds, terrain and situations. For example, horses show four natural gaits: the slowest horse gait is the walk, then there are three faster gaits which, from slowest to fastest, are the trot, the canter and the gallop. Animals may also have unusual gaits that are used occasionally, such as for moving sideways or backwards. For example, the main human gaits are bipedal walking and running, but humans employ many other gaits occasionally, including a four-legged crawl in tight spaces. Mammals show a vast range of gaits, the order in which they place and lift their appendages in locomotion. Gaits can be grouped into categories according to their patterns of support sequence. For quadrupeds, there are three main categories: walking gaits, running gaits and leaping gaits. Walking is the most common gait, where some feet are on the ground at any given time, and is found in almost all legged animals. Running is considered to occur when at some points in the stride all feet are off the ground in a moment of suspension. Arboreal Arboreal animals frequently have elongated limbs that help them cross gaps, reach fruit or other resources, test the firmness of support ahead and, in some cases, brachiate (swing between trees). Many arboreal species, such as tree porcupines, silky anteaters, spider monkeys, and possums, use prehensile tails to grasp branches. In the spider monkey, the tip of the tail has either a bare patch or adhesive pad, which provides increased friction. Claws can be used to interact with rough substrates and reorient the direction of forces the animal applies. This is what allows squirrels to climb tree trunks that are so large as to be essentially flat from the perspective of such a small animal. However, claws can interfere with an animal's ability to grasp very small branches, as they may wrap too far around and prick the animal's own paw.
Frictional gripping is used by primates, relying upon hairless fingertips. Squeezing the branch between the fingertips generates frictional force that holds the animal's hand to the branch. However, this type of grip depends upon the angle of the frictional force, and thus upon the diameter of the branch, with larger branches resulting in reduced gripping ability. To control descent, especially down large-diameter branches, some arboreal animals such as squirrels have evolved highly mobile ankle joints that permit rotating the foot into a 'reversed' posture. This allows the claws to hook into the rough surface of the bark, opposing the force of gravity. Small size provides many advantages to arboreal species, such as increasing the relative size of branches to the animal, a lower center of mass, increased stability, lower mass (allowing movement on smaller branches) and the ability to move through more cluttered habitat. Size relative to weight affects gliding animals such as the sugar glider. Some species of primate, bat and all species of sloth achieve passive stability by hanging beneath the branch. Both pitching and tipping become irrelevant, as the only method of failure would be losing their grip. Aerial Bats are the only mammals that can truly fly. They fly through the air at a constant speed by moving their wings up and down (usually with some fore-aft movement as well). Because the animal is in motion, there is some airflow relative to its body which, combined with the velocity of the wings, generates a faster airflow moving over the wing. This generates a lift force vector pointing forwards and upwards, and a drag force vector pointing rearwards and upwards. The upwards components of these counteract gravity, keeping the body in the air, while the forward component provides thrust to counteract both the drag from the wing and from the body as a whole.
The wings of bats are much thinner and consist of more bones than those of birds, allowing bats to maneuver more accurately and fly with more lift and less drag. By folding the wings inwards towards their body on the upstroke, they use 35% less energy during flight than birds. The membranes are delicate, ripping easily; however, the tissue of the bat's membrane is able to regrow, such that small tears can heal quickly. The surface of their wings is equipped with touch-sensitive receptors on small bumps called Merkel cells, also found on human fingertips. These sensitive areas are different in bats, as each bump has a tiny hair in the center, making it even more sensitive and allowing the bat to detect and collect information about the air flowing over its wings, and to fly more efficiently by changing the shape of its wings in response. Fossorial and subterranean A fossorial animal (from Latin fossor, meaning "digger") is one adapted to digging which lives primarily, but not solely, underground. Some examples are badgers and naked mole-rats. Many rodent species are also considered fossorial because they live in burrows for most but not all of the day. Species that live exclusively underground are subterranean, and those with limited adaptations to a fossorial lifestyle are sub-fossorial. Some organisms are fossorial to aid in temperature regulation, while others use the underground habitat for protection from predators or for food storage. Fossorial mammals have a fusiform body, thickest at the shoulders and tapering off at the tail and nose. Unable to see in the dark burrows, most have degenerated eyes, but degeneration varies between species: pocket gophers, for example, are only semi-fossorial and have very small yet functional eyes; in the fully fossorial marsupial mole the eyes are degenerated and useless; Talpa moles have vestigial eyes; and the Cape golden mole has a layer of skin covering the eyes. External ear flaps are also very small or absent.
Truly fossorial mammals have short, stout legs, as strength is more important than speed to a burrowing mammal, but semi-fossorial mammals have cursorial legs. The front paws are broad and have strong claws to help in loosening dirt while excavating burrows, and the back paws have webbing, as well as claws, which aids in throwing loosened dirt backwards. Most have large incisors to prevent dirt from flying into their mouth. Many fossorial mammals such as shrews, hedgehogs, and moles were classified under the now obsolete order Insectivora. Aquatic Fully aquatic mammals, the cetaceans and sirenians, have lost their legs and have a tail fin to propel themselves through the water. Flipper movement is continuous. Whales swim by moving their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Their skeletal anatomy allows them to be fast swimmers. Most species have a dorsal fin to prevent themselves from turning upside-down in the water. The flukes of sirenians are raised up and down in long strokes to move the animal forward, and can be twisted to turn. The forelimbs are paddle-like flippers which aid in turning and slowing. Semi-aquatic mammals, like pinnipeds, have two pairs of flippers on the front and back, the fore-flippers and hind-flippers. The elbows and ankles are enclosed within the body. Pinnipeds have several adaptations for reducing drag. In addition to their streamlined bodies, they have smooth networks of muscle bundles in their skin that may increase laminar flow and make it easier for them to slip through water. They also lack arrector pili, so their fur can be streamlined as they swim. They rely on their fore-flippers for locomotion in a wing-like manner similar to penguins and sea turtles. Fore-flipper movement is not continuous, and the animal glides between each stroke.
Compared to terrestrial carnivorans, the fore-limbs are reduced in length, which gives the locomotor muscles at the shoulder and elbow joints greater mechanical advantage; the hind-flippers serve as stabilizers. Other semi-aquatic mammals include beavers, hippopotamuses, otters and platypuses. Hippos are very large semi-aquatic mammals, and their barrel-shaped bodies have graviportal skeletal structures, adapted to carrying their enormous weight, and their specific gravity allows them to sink and move along the bottom of a river. Behavior Communication and vocalization Many mammals communicate by vocalizing. Vocal communication serves many purposes, including in mating rituals, as warning calls, to indicate food sources, and for social purposes. Males often call during mating rituals to ward off other males and to attract females, as in the roaring of lions and red deer. The songs of the humpback whale may be signals to females; they have different dialects in different regions of the ocean. Social vocalizations include the territorial calls of gibbons, and the use of frequency in greater spear-nosed bats to distinguish between groups. The vervet monkey gives a distinct alarm call for each of at least four different predators, and the reactions of other monkeys vary according to the call. For example, if an alarm call signals a python, the monkeys climb into the trees, whereas the eagle alarm causes monkeys to seek a hiding place on the ground. Prairie dogs similarly have complex calls that signal the type, size, and speed of an approaching predator. Elephants communicate socially with a variety of sounds including snorting, screaming, trumpeting, and rumbling. The largest group of mammals, the placentals, have a placenta, which enables the feeding of the fetus during gestation. Most mammals are intelligent, with some possessing large brains, self-awareness, and tool use.
Mammals can communicate and vocalize in several ways, including the production of ultrasound, scent-marking, alarm signals, singing, and echolocation. Mammals can organize themselves into fission-fusion societies, harems, and hierarchies, but can also be solitary and territorial. Most mammals are polygynous, but some can be monogamous or polyandrous. Domestication of many types of mammals by humans played a major role in the Neolithic revolution, and resulted in farming replacing hunting and gathering as the primary source of food for humans. This led to a major restructuring of human societies from nomadic to sedentary, with more co-operation among larger and larger groups, and ultimately the development of the first civilizations. Domesticated mammals provided, and continue to provide, power for transport and agriculture, as well as food (meat and dairy products), fur, and leather. Mammals are also hunted and raced for sport, and are used as model organisms in science. Mammals have been depicted in art since Paleolithic times, and appear in literature, film, mythology, and religion. The decline in numbers and extinction of many mammals is driven mainly by human poaching and habitat destruction, chiefly deforestation. Classification Mammal classification has been through several revisions since Carl Linnaeus initially defined the class, and at present, no classification system is universally accepted. McKenna & Bell (1997) and Wilson & Reeder (2005) provide useful recent compendiums. Simpson (1945) provides systematics of mammal origins and relationships that were taught universally until the end of the 20th century. However, since 1945, a large amount of new and more detailed information has gradually been found: the paleontological record has been recalibrated, and the intervening years have seen much debate and progress concerning the theoretical underpinnings of systematization itself, partly through the new concept of cladistics.
Though fieldwork and lab work progressively outdated Simpson's classification, it remains the closest thing to an official classification of mammals, despite its known issues. Most mammals, including the six most species-rich orders, belong to the placental group. The three largest orders in numbers of species are Rodentia: mice, rats, porcupines, beavers, capybaras, and other gnawing mammals; Chiroptera: bats; and Soricomorpha: shrews, moles, and solenodons. The next three biggest orders, depending on the biological classification scheme used, are the Primates: apes, monkeys, and lemurs; the Cetartiodactyla: whales and even-toed ungulates; and the Carnivora, which includes cats, dogs, weasels, bears, seals, and allies. According to Mammal Species of the World, 5,416 species were identified in 2006. These were grouped into 1,229 genera, 153 families and 29 orders. In 2008, the International Union for Conservation of Nature (IUCN) completed a five-year Global Mammal Assessment for its IUCN Red List, which counted 5,488 species. According to research published in the Journal of Mammalogy in 2018, the number of recognized mammal species is 6,495, including 96 recently extinct. Definitions The word "mammal" is modern, from the scientific name Mammalia coined by Carl Linnaeus in 1758, derived from the Latin mamma ("teat, pap"). In an influential 1988 paper, Timothy Rowe defined Mammalia phylogenetically as the crown group of mammals, the clade consisting of the most recent common ancestor of living monotremes (echidnas and platypuses) and therian mammals (marsupials and placentals) and all descendants of that ancestor. Since this ancestor lived in the Jurassic period, Rowe's definition excludes all animals from the earlier Triassic, despite the fact that Triassic fossils in the Haramiyida have been referred to the Mammalia since the mid-19th century.
If Mammalia is considered as the crown group, its origin can be roughly dated as the first known appearance of animals more closely related to some extant mammals than to others. Ambondro is more closely related to monotremes than to therian mammals, while Amphilestes and Amphitherium are more closely related to the therians; as fossils of all three genera are dated to the Middle Jurassic, this is a reasonable estimate for the appearance of the crown group. T.S. Kemp has provided a more traditional definition: "Synapsids that possess a dentary–squamosal jaw articulation and occlusion between upper and lower molars with a transverse component to the movement" or, equivalently in Kemp's view, the clade originating with the last common ancestor of Sinoconodon and living mammals. The earliest known synapsid satisfying Kemp's definitions is Tikitherium, dated to the Late Triassic, so the appearance of mammals in this broader sense can be given this date. McKenna/Bell classification In 1997, the mammals were comprehensively revised by Malcolm C. McKenna and Susan K. Bell, resulting in the McKenna/Bell classification. The authors worked together as paleontologists at the American Museum of Natural History. McKenna inherited the project from Simpson and, with Bell, constructed a completely updated hierarchical system, covering living and extinct taxa, that reflects the historical genealogy of Mammalia. Their 1997 book, Classification of Mammals above the Species Level, is a comprehensive work on the systematics, relationships and occurrences of all mammal taxa, living and extinct, down through the rank of genus, though molecular genetic data challenge several of the higher-level groupings. In the following list, extinct groups are labelled with a dagger (†).
Class Mammalia
  Subclass Prototheria: monotremes: echidnas and the platypus
  Subclass Theriiformes: live-bearing mammals and their prehistoric relatives
    Infraclass †Allotheria: multituberculates
    Infraclass †Eutriconodonta: eutriconodonts
    Infraclass Holotheria: modern live-bearing mammals and their prehistoric relatives
      Superlegion †Kuehneotheria
      Supercohort Theria: live-bearing mammals
        Cohort Marsupialia: marsupials
          Magnorder Australidelphia: Australian marsupials and the monito del monte
          Magnorder Ameridelphia: New World marsupials. Now considered paraphyletic, with shrew opossums being closer to australidelphians.
        Cohort Placentalia: placentals
          Magnorder Xenarthra: xenarthrans
          Magnorder Epitheria: epitheres
            Superorder †Leptictida
            Superorder Preptotheria
              Grandorder Anagalida: lagomorphs, rodents and elephant shrews
              Grandorder Ferae: carnivorans, pangolins, †creodonts and relatives
              Grandorder Lipotyphla: insectivorans
              Grandorder Archonta: bats, primates, colugos and treeshrews
              Grandorder Ungulata: ungulates
                Order Tubulidentata incertae sedis: aardvark
                Mirorder Eparctocyona: †condylarths, whales and artiodactyls (even-toed ungulates)
                Mirorder †Meridiungulata: South American ungulates
                Mirorder Altungulata: perissodactyls (odd-toed ungulates), elephants, manatees and hyraxes
Molecular classification of placentals As of the early 21st century, molecular studies based on DNA analysis have suggested new relationships among mammal families. Most of these findings have been independently validated by retrotransposon presence/absence data. Classification systems based on molecular studies reveal three major groups or lineages of placental mammals (Afrotheria, Xenarthra and Boreoeutheria) which diverged in the Cretaceous. The relationships between these three lineages are contentious, and all three possible hypotheses have been proposed with respect to which group is basal.
These hypotheses are Atlantogenata (basal Boreoeutheria), Epitheria (basal Xenarthra) and Exafroplacentalia (basal Afrotheria). Boreoeutheria in turn contains two major lineages, Euarchontoglires and Laurasiatheria. Estimates for the divergence times between these three placental groups range from 105 to 120 million years ago, depending on the type of DNA used (such as nuclear or mitochondrial) and varying interpretations of paleogeographic data. The cladogram above is based on Tarver et al. (2016).
Group I: Superorder Afrotheria
  Clade Afroinsectiphilia
    Order Macroscelidea: elephant shrews (Africa)
    Order Afrosoricida: tenrecs and golden moles (Africa)
    Order Tubulidentata: aardvark (Africa south of the Sahara)
  Clade Paenungulata
    Order Hyracoidea: hyraxes or dassies (Africa, Arabia)
    Order Proboscidea: elephants (Africa, Southeast Asia)
    Order Sirenia: dugong and manatees (cosmopolitan tropical)
Group II: Superorder Xenarthra
  Order Pilosa: sloths and anteaters (neotropical)
  Order Cingulata: armadillos and extinct relatives (Americas)
Group III: Magnaorder Boreoeutheria
  Superorder Euarchontoglires (Supraprimates)
    Grandorder Euarchonta
      Order Scandentia: treeshrews (Southeast Asia)
      Order Dermoptera: flying lemurs or colugos (Southeast Asia)
      Order Primates: lemurs, bushbabies, monkeys, apes, humans (cosmopolitan)
    Grandorder Glires
      Order Lagomorpha: pikas, rabbits, hares (Eurasia, Africa, Americas)
      Order Rodentia: rodents (cosmopolitan)
  Superorder Laurasiatheria
    Order Eulipotyphla: shrews, hedgehogs, moles, solenodons
    Clade Scrotifera
      Order Chiroptera: bats (cosmopolitan)
      Clade Fereuungulata
        Clade Ferae
          Order Pholidota: pangolins or scaly anteaters (Africa, South Asia)
          Order Carnivora: carnivores (cosmopolitan), including cats and dogs
        Clade Euungulata
          Order Cetartiodactyla: cetaceans (whales, dolphins and porpoises) and even-toed ungulates, including pigs, cattle, deer and giraffes
          Order Perissodactyla: odd-toed ungulates, including horses, donkeys, zebras, tapirs and rhinoceroses
Evolution Origins Synapsida, a clade that contains mammals and their extinct relatives, originated during the Pennsylvanian subperiod (~323 million to ~300 million years ago), when it split from the reptile lineage. Crown group mammals evolved from earlier mammaliaforms during the Early Jurassic. The cladogram takes Mammalia to be the crown group. Evolution from older amniotes The first fully terrestrial vertebrates were amniotes. Like their amphibious early tetrapod predecessors, they had lungs and limbs. Amniotic eggs, however, have internal membranes that allow the developing embryo to breathe but keep water in. Hence, amniotes can lay eggs on dry land, while amphibians generally need to lay their eggs in water. The first amniotes apparently arose in the Pennsylvanian subperiod of the Carboniferous. They descended from earlier reptiliomorph amphibious tetrapods, which lived on land that was already inhabited by insects and other invertebrates as well as ferns, mosses and other plants.
Within a few million years, two important amniote lineages became distinct: the synapsids, which would later include the common ancestor of the mammals; and the sauropsids, which now include turtles, lizards, snakes, crocodilians and dinosaurs (including birds). Synapsids have a single hole (temporal fenestra) low on each side of the skull. Primitive synapsids included the largest and fiercest animals of the early Permian, such as Dimetrodon. Nonmammalian synapsids were traditionally, and incorrectly, called "mammal-like reptiles" or pelycosaurs; it is now known that they were neither reptiles nor part of the reptile lineage. Therapsids, a group of synapsids, evolved in the Middle Permian, about 265 million years ago, and became the dominant land vertebrates. They differ from basal eupelycosaurs in several features of the skull and jaws, including larger skulls and incisors of equal size, traits found in therapsids but not in eupelycosaurs. The therapsid lineage leading to mammals went through a series of stages, beginning with animals that were very similar to their early synapsid ancestors and ending with probainognathian cynodonts, some of which could easily be mistaken for mammals. Those stages were characterized by:
The gradual development of a bony secondary palate.
Progression towards an erect limb posture, which would increase the animals' stamina by avoiding Carrier's constraint. But this process was slow and erratic: for example, all herbivorous nonmammaliaform therapsids retained sprawling limbs (some late forms may have had semierect hind limbs); Permian carnivorous therapsids had sprawling forelimbs, and some late Permian ones also had semisprawling hindlimbs. In fact, modern monotremes still have semisprawling limbs.
The gradual emergence of the dentary as the main bone of the lower jaw which, by the Triassic, progressed towards the fully mammalian jaw (with the lower jaw consisting only of the dentary) and middle ear (which is constructed from bones that previously formed part of the jaws of earlier synapsids).

First mammals

The Permian–Triassic extinction event about 252 million years ago, which was a prolonged event due to the accumulation of several extinction pulses, ended the dominance of carnivorous therapsids. In the early Triassic, most medium to large land carnivore niches were taken over by archosaurs which, over an extended period (35 million years), came to include the crocodylomorphs, the pterosaurs and the dinosaurs; however, large cynodonts like Trucidocynodon and traversodontids still occupied large-sized carnivorous and herbivorous niches respectively. By the Jurassic, the dinosaurs had come to dominate the large terrestrial herbivore niches as well. The first mammals (in Kemp's sense) appeared in the Late Triassic epoch (about 225 million years ago), 40 million years after the first therapsids. They expanded out of their nocturnal insectivore niche from the mid-Jurassic onwards; the Jurassic Castorocauda, for example, was a close relative of true mammals that had adaptations for swimming, digging and catching fish. Most, if not all, are thought to have remained nocturnal (the nocturnal bottleneck), accounting for much of the typical mammalian traits. The majority of the mammal species that existed in the Mesozoic Era were multituberculates, eutriconodonts and spalacotheriids. The earliest known metatherian is Sinodelphys, found in 125 million-year-old Early Cretaceous shale in China's northeastern Liaoning Province. The fossil is nearly complete and includes tufts of fur and imprints of soft tissues.
The oldest known fossil among the Eutheria ("true beasts") is the small shrewlike Juramaia sinensis, or "Jurassic mother from China", dated to 160 million years ago in the late Jurassic. A later eutherian relative, Eomaia, dated to 125 million years ago in the early Cretaceous, possessed some features in common with the marsupials but not with the placentals, evidence that these features were present in the last common ancestor of the two groups but were later lost in the placental lineage. In particular, the epipubic bones extend forwards from the pelvis. These are not found in any modern placental, but they are found in marsupials, monotremes, other nontherian mammals and Ukhaatherium, an early Cretaceous animal in the eutherian order Asioryctitheria. This also applies to the multituberculates. They are apparently an ancestral feature, which subsequently disappeared in the placental lineage. The epipubic bones seem to function by stiffening the trunk during locomotion, at the cost of restricting the abdominal space that placentals require to contain a fetus during gestation. A narrow pelvic outlet indicates that the young were very small at birth and therefore pregnancy was short, as in modern marsupials. This suggests that the placenta was a later development. One of the earliest known monotremes was Teinolophos, which lived about 120 million years ago in Australia. Monotremes have some features which may be inherited from the original amniotes, such as a single orifice for urinating, defecating and reproducing (the cloaca), which lizards and birds also have, and eggs which are leathery and uncalcified.

Earliest appearances of features

Hadrocodium, whose fossils date from approximately 195 million years ago, in the early Jurassic, provides the first clear evidence of a jaw joint formed solely by the squamosal and dentary bones; there is no space in the jaw for the articular, a bone involved in the jaws of all early synapsids.
The earliest clear evidence of hair or fur is in fossils of Castorocauda and Megaconus, from 164 million years ago in the mid-Jurassic. In the 1950s, it was suggested that the foramina (passages) in the maxillae and premaxillae (bones in the front of the upper jaw) of cynodonts were channels which supplied blood vessels and nerves to vibrissae (whiskers) and so were evidence of hair or fur; it was soon pointed out, however, that foramina do not necessarily show that an animal had vibrissae, as the modern lizard Tupinambis has foramina that are almost identical to those found in the nonmammalian cynodont Thrinaxodon. Popular sources, nevertheless, continue to attribute whiskers to Thrinaxodon. Studies on Permian coprolites suggest that non-mammalian synapsids of the epoch already had fur, setting the evolution of hair possibly as far back as the dicynodonts. When endothermy first appeared in the evolution of mammals is uncertain, though it is generally agreed to have first evolved in non-mammalian therapsids. Modern monotremes have lower body temperatures and more variable metabolic rates than marsupials and placentals, but there is evidence that some of their ancestors, perhaps including ancestors of the therians, may have had body temperatures like those of modern therians. Likewise, some modern therians like afrotheres and xenarthrans have secondarily developed lower body temperatures. The evolution of erect limbs in mammals is incomplete: living and fossil monotremes have sprawling limbs. The parasagittal (nonsprawling) limb posture appeared sometime in the late Jurassic or early Cretaceous; it is found in the eutherian Eomaia and the metatherian Sinodelphys, both dated to 125 million years ago. Epipubic bones, a feature that strongly influenced the reproduction of most mammal clades, are first found in Tritylodontidae, suggesting that they are a synapomorphy shared with Mammaliaformes.
They are omnipresent in non-placental mammaliaforms, though Megazostrodon and Erythrotherium appear to have lacked them. It has been suggested that the original function of lactation (milk production) was to keep eggs moist. Much of the argument is based on monotremes, the egg-laying mammals. In human females, mammary glands become fully developed during puberty, regardless of pregnancy.

Rise of the mammals

Therian mammals took over the medium- to large-sized ecological niches in the Cenozoic, after the Cretaceous–Paleogene extinction event approximately 66 million years ago emptied ecological space once filled by non-avian dinosaurs and other groups of reptiles, as well as by various other mammal groups, and they underwent an exponential increase in body size (megafauna). Mammals then diversified very quickly; both birds and mammals show an exponential rise in diversity. For example, the earliest known bat dates from about 50 million years ago, only 16 million years after the extinction of the non-avian dinosaurs. Molecular phylogenetic studies initially suggested that most placental orders diverged about 100 to 85 million years ago and that modern families appeared in the period from the late Eocene through the Miocene. However, no placental fossils have been found from before the end of the Cretaceous. The earliest undisputed fossils of placentals come from the early Paleocene, after the extinction of the non-avian dinosaurs. In particular, scientists identified an early Paleocene animal named Protungulatum donnae as one of the first placental mammals; however, it has since been reclassified as a non-placental eutherian. Recalibrations of genetic and morphological diversity rates have suggested a Late Cretaceous origin for placentals, and a Paleocene origin for most modern clades. The earliest known ancestor of primates is Archicebus achilles from around 55 million years ago. This tiny primate weighed 20–30 grams (0.7–1.1 ounce) and could fit within a human palm.
Anatomy

Distinguishing features

Living mammal species can be identified by the presence of sweat glands, including those that are specialized to produce milk to nourish their young. In classifying fossils, however, other features must be used, since soft tissue glands and many other features are not visible in fossils. Many traits shared by all living mammals appeared among the earliest members of the group:
Jaw joint – The dentary (the lower jaw bone, which carries the teeth) and the squamosal (a small cranial bone) meet to form the joint. In most gnathostomes, including early therapsids, the joint consists of the articular (a small bone at the back of the lower jaw) and the quadrate (a small bone at the back of the upper jaw).
Middle ear – In crown-group mammals, sound is carried from the eardrum by a chain of three bones, the malleus, the incus and the stapes. Ancestrally, the malleus and the incus are derived from the articular and the quadrate bones that constituted the jaw joint of early therapsids.
Tooth replacement – Teeth can be replaced once (diphyodonty) or (as in toothed whales and murid rodents) not at all (monophyodonty). Elephants, manatees, and kangaroos continually grow new teeth throughout their life (polyphyodonty).
Prismatic enamel – The enamel coating on the surface of a tooth consists of prisms: solid, rod-like structures extending from the dentin to the tooth's surface.
Occipital condyles – Two knobs at the base of the skull fit into the topmost neck vertebra; most other tetrapods, in contrast, have only one such knob.
For the most part, these characteristics were not present in the Triassic ancestors of the mammals. Nearly all mammaliaforms possess an epipubic bone, the exception being modern placentals.

Sexual dimorphism

On average, male mammals are larger than females, with males being at least 10% larger than females in over 45% of investigated species.
Most mammalian orders also exhibit male-biased sexual dimorphism, although some orders show no bias or are significantly female-biased (Lagomorpha). Sexual size dimorphism increases with body size across mammals (Rensch's rule), suggesting that there are parallel selection pressures on both male and female size. Male-biased dimorphism relates to sexual selection on males through male–male competition for females, as there is a positive correlation between the degree of sexual selection, as indicated by mating systems, and the degree of male-biased size dimorphism. The degree of sexual selection is also positively correlated with male and female size across mammals. Further, parallel selection pressure on female mass is indicated by the fact that age at weaning is significantly higher in more polygynous species, even when correcting for body mass. Also, the reproductive rate is lower for larger females, indicating that fecundity selection favors smaller females in mammals. Although these patterns hold across mammals as a whole, there is considerable variation across orders.

Biological systems

The majority of mammals have seven cervical vertebrae (bones in the neck). The exceptions are the manatee and the two-toed sloth, which have six, and the three-toed sloth, which has nine. All mammalian brains possess a neocortex, a brain region unique to mammals. Placental brains have a corpus callosum, unlike those of monotremes and marsupials. The lungs of mammals are spongy and honeycombed. Breathing is mainly achieved with the diaphragm, which divides the thorax from the abdominal cavity, forming a dome convex to the thorax. Contraction of the diaphragm flattens the dome, increasing the volume of the lung cavity. Air enters through the oral and nasal cavities, and travels through the larynx, trachea and bronchi, expanding the alveoli. Relaxing the diaphragm has the opposite effect: it decreases the volume of the lung cavity, causing air to be pushed out of the lungs.
During exercise, the abdominal wall contracts, increasing pressure on the diaphragm, which forces air out more quickly and forcefully. The rib cage is able to expand and contract the chest cavity through the action of other respiratory muscles. Consequently, air is sucked into or expelled out of the lungs, always moving down its pressure gradient. This type of lung is known as a bellows lung due to its resemblance to blacksmith bellows. The mammalian heart has four chambers: two upper atria, the receiving chambers, and two lower ventricles, the discharging chambers. The heart has four valves, which separate its chambers and ensure blood flows in the correct direction through the heart (preventing backflow). After gas exchange in the pulmonary capillaries (blood vessels in the lungs), oxygen-rich blood returns to the left atrium via one of the four pulmonary veins. Blood flows nearly continuously back into the atrium, which acts as the receiving chamber, and from here through an opening into the left ventricle. Most blood flows passively into the heart while both the atria and ventricles are relaxed, but toward the end of the ventricular relaxation period, the left atrium contracts, pumping blood into the ventricle. Like other muscles, the heart requires the nutrients and oxygen found in blood, and is supplied via the coronary arteries. The integumentary system (skin) is made up of three layers: the outermost epidermis, the dermis and the hypodermis. The epidermis is typically 10 to 30 cells thick; its main function is to provide a waterproof layer. Its outermost cells are constantly lost; its bottommost cells are constantly dividing and pushing upward. The middle layer, the dermis, is 15 to 40 times thicker than the epidermis. The dermis is made up of many components, such as bony structures and blood vessels. The hypodermis is made up of adipose tissue, which stores lipids and provides cushioning and insulation.
The thickness of this layer varies widely from species to species; marine mammals require a thick hypodermis (blubber) for insulation, and right whales have the thickest blubber of any mammal. Although other animals have features such as whiskers, feathers, setae, or cilia that superficially resemble it, no animals other than mammals have hair. It is a definitive characteristic of the class, though some mammals have very little. Herbivores have developed a diverse range of physical structures to facilitate the consumption of plant material. To break up intact plant tissues, mammals have developed teeth structures that reflect their feeding preferences. For instance, frugivores (animals that feed primarily on fruit) and herbivores that feed on soft foliage have low-crowned teeth specialized for grinding foliage and seeds. Grazing animals that tend to eat hard, silica-rich grasses have high-crowned teeth, which are capable of grinding tough plant tissues and do not wear down as quickly as low-crowned teeth. Most carnivorous mammals have carnassialiform teeth (of varying length depending on diet), long canines and similar tooth replacement patterns. The stomach of even-toed ungulates (Artiodactyla) is divided into four sections: the rumen, the reticulum, the omasum and the abomasum (only ruminants have a rumen). After the plant material is consumed, it is mixed with saliva in the rumen and reticulum and separates into solid and liquid material. The solids lump together to form a bolus (or cud), which is regurgitated. When the bolus enters the mouth, the fluid is squeezed out with the tongue and swallowed again. Ingested food passes to the rumen and reticulum, where cellulolytic microbes (bacteria, protozoa and fungi) produce cellulase, which is needed to break down the cellulose in plants. Perissodactyls, in contrast to the ruminants, store digested food that has left the stomach in an enlarged cecum, where it is fermented by bacteria.
Carnivora have a simple stomach adapted to digest primarily meat, as compared to the elaborate digestive systems of herbivorous animals, which are necessary to break down tough, complex plant fibers. The caecum is either absent or short and simple, and the large intestine is not sacculated or much wider than the small intestine. The mammalian excretory system involves many components. Like most other land animals, mammals are ureotelic, converting ammonia into urea; this is done by the liver as part of the urea cycle. Bilirubin, a waste product derived from blood cells, is passed through bile and urine with the help of enzymes excreted by the liver. The passing of bilirubin via bile through the intestinal tract gives mammalian feces a distinctive brown coloration. Distinctive features of the mammalian kidney include the presence of the renal pelvis and renal pyramids, and of a clearly distinguishable cortex and medulla, which is due to the presence of elongated loops of Henle. Only the mammalian kidney has a bean shape, although there are some exceptions, such as the multilobed reniculate kidneys of pinnipeds, cetaceans and bears. Most adult placental mammals have no remaining trace of the cloaca. In the embryo, the embryonic cloaca divides into a posterior region that becomes part of the anus, and an anterior region that has different fates depending on the sex of the individual: in females, it develops into the vestibule that receives the urethra and vagina, while in males it forms the entirety of the penile urethra. However, the tenrecs, golden moles, and some shrews retain a cloaca as adults. In marsupials, the genital tract is separate from the anus, but a trace of the original cloaca does remain externally. Monotremes, whose name translates from Greek as "single hole", have a true cloaca.
Sound production

As in all other tetrapods, mammals have a larynx that can quickly open and close to produce sounds, and a supralaryngeal vocal tract which filters this sound. The lungs and surrounding musculature provide the air stream and pressure required to phonate. The larynx controls the pitch and volume of sound, but the strength the lungs exert to exhale also contributes to volume. More primitive mammals, such as the echidna, can only hiss, as sound is achieved solely by exhaling through a partially closed larynx. Other mammals phonate using vocal folds. The movement or tenseness of the vocal folds can produce many sounds, such as purring and screaming. Mammals can change the position of the larynx, allowing them to breathe through the nose while swallowing through the mouth, and to form both oral and nasal sounds; nasal sounds, such as a dog whine, are generally soft sounds, and oral sounds, such as a dog bark, are generally loud. Some mammals have a large larynx and thus a low-pitched voice, notably the hammer-headed bat (Hypsignathus monstrosus), in which the larynx can take up the entirety of the thoracic cavity while pushing the lungs, heart, and trachea into the abdomen. Large vocal pads can also lower the pitch, as in the low-pitched roars of big cats. The production of infrasound is possible in some mammals such as the African elephant (Loxodonta spp.) and baleen whales. Small mammals with small larynxes have the ability to produce ultrasound, which can be detected by modifications to the middle ear and cochlea. Ultrasound is inaudible to birds and reptiles, which might have been important during the Mesozoic, when birds and reptiles were the dominant predators. This private channel is used by some rodents, for example, in mother-to-pup communication, and by bats when echolocating. Toothed whales also use echolocation but, instead of a vocal membrane extending upward from the vocal folds, they have a melon to manipulate sounds.
Some mammals, namely the primates, have air sacs attached to the larynx, which may function to lower the resonances or increase the volume of sound. The vocal production system is controlled by the cranial nerve nuclei in the brain, and supplied by the recurrent laryngeal nerve and the superior laryngeal nerve, branches of the vagus nerve. The vocal tract is supplied by the hypoglossal and facial nerves. Electrical stimulation of the periaqueductal gray (PAG) region of the mammalian midbrain elicits vocalizations. The ability to learn new vocalizations is only exemplified in humans, seals, cetaceans, elephants and possibly bats; in humans, this is the result of a direct connection between the motor cortex, which controls movement, and the motor neurons in the spinal cord.

Fur

The primary function of the fur of mammals is thermoregulation. Others include protection, sensory purposes, waterproofing, and camouflage. Different types of fur serve different purposes:
Definitive – which may be shed after reaching a certain length
Vibrissae – sensory hairs, most commonly whiskers
Pelage – guard hairs, under-fur, and awn hair
Spines – stiff guard hairs used for defense (such as in porcupines)
Bristles – long hairs usually used in visual signals (such as a lion's mane)
Velli – often called "down fur", which insulates newborn mammals
Wool – long, soft and often curly

Thermoregulation

Hair length is not a factor in thermoregulation: for example, some tropical mammals such as sloths have the same fur length as some arctic mammals but with less insulation; and, conversely, other tropical mammals with short hair have the same insulating value as arctic mammals. The density of fur can increase an animal's insulation value, and arctic mammals especially have dense fur; for example, the musk ox has long guard hairs as well as a dense underfur, which forms an airtight coat, allowing it to survive in extremely cold temperatures.
Some desert mammals, such as camels, use dense fur to prevent solar heat from reaching their skin, allowing the animal to stay cool; the surface of a camel's fur may become very hot in the summer, while the skin beneath stays markedly cooler. Aquatic mammals, conversely, trap air in their fur to conserve heat by keeping the skin dry.

Coloration

Mammalian coats are colored for a variety of reasons, the major selective pressures including camouflage, sexual selection, communication, and thermoregulation. Coloration in both the hair and skin of mammals is mainly determined by the
in the context of evolutionary theory. Charles Darwin speculated that music may have held an adaptive advantage and functioned as a protolanguage, a view which has spawned several competing theories of music evolution. An alternate view sees music as a by-product of linguistic evolution, a type of "auditory cheesecake" that pleases the senses without providing any adaptive function. This view has been directly countered by numerous music researchers.

Cultural effects

An individual's culture or ethnicity plays a role in their music cognition, including their preferences, emotional reaction, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults' classification of the emotion of a musical piece depends on both culturally specific and universal structural features. Additionally, individuals' musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music.

Sociological aspects

Many ethnographic studies demonstrate that music is a participatory, community-based activity. Music is experienced by individuals in a range of social settings, from being alone to attending a large concert; these settings form a music community, which cannot be understood as a function of individual will or accident, and which includes both commercial and non-commercial participants with a shared set of common values. Musical performances take different forms in different cultures and socioeconomic milieus. In Europe and North America, there is often a divide between what types of music are viewed as "high culture" and "low culture." "High culture" types of music typically include Western art music such as Baroque, Classical, Romantic, and modern-era symphonies, concertos, and solo works, and are typically heard in formal concerts in concert halls and churches, with the audience sitting quietly in seats.
Other types of music—including, but not limited to, jazz, blues, soul, and country—are often performed in bars, nightclubs, and theatres, where the audience may be able to drink, dance, and express themselves by cheering. Until the later 20th century, the division between "high" and "low" musical forms was widely accepted as a valid distinction that separated out better quality, more advanced "art music" from the popular styles of music heard in bars and dance halls. However, in the 1980s and 1990s, musicologists studying this perceived divide between "high" and "low" musical genres argued that the distinction is not based on the musical value or quality of the different types of music. Rather, they argued, it is based largely on the socioeconomic standing or social class of the performers or audience of the different types of music. For example, whereas the audience for Classical symphony concerts typically has above-average incomes, the audience for a rap concert in an inner-city area may have below-average incomes. Even though the performers, audience, or venue where non-"art" music is performed may have a lower socioeconomic status, the music that is performed, such as blues, rap, punk, funk, or ska may be very complex and sophisticated. When composers introduce styles of music that break with convention, there can be a strong resistance from academic music experts and popular culture. Late-period Beethoven string quartets, Stravinsky ballet scores, serialism, bebop-era jazz, hip hop, punk rock, and electronica have all been considered non-music by some critics when they were first introduced. Such themes are examined in the sociology of music. The sociological study of music, sometimes called sociomusicology, is often pursued in departments of sociology, media studies, or music, and is closely related to the field of ethnomusicology.
Role of women

Women have played a major role in music throughout history, as composers, songwriters, instrumental performers, singers, conductors, music scholars, music educators, music critics/music journalists and in other musical professions. The history of music also encompasses movements, events and genres related to women, women's issues and feminism. In the 2010s, while women comprise a significant proportion of popular music and classical music singers, and a significant proportion of songwriters (many of them being singer-songwriters), there are few women record producers, rock critics and rock instrumentalists. Although there have been a huge number of women composers in classical music, from the medieval period to the present day, women composers are significantly underrepresented in the commonly performed classical music repertoire, music history textbooks and music encyclopedias; for example, in the Concise Oxford History of Music, Clara Schumann is one of the few female composers who is mentioned. Women comprise a significant proportion of instrumental soloists in classical music, and the percentage of women in orchestras is increasing. A 2015 article on concerto soloists in major Canadian orchestras, however, indicated that 84% of the soloists with the Orchestre Symphonique de Montreal were men. In 2012, women still made up just 6% of the top-ranked Vienna Philharmonic orchestra. Women are less common as instrumental players in popular music genres such as rock and heavy metal, although there have been a number of notable female instrumentalists and all-female bands. Women are particularly underrepresented in extreme metal genres. In the 1960s pop-music scene, "[l]ike most aspects of the...music business, [in the 1960s,] songwriting was a male-dominated field. Though there were plenty of female singers on the radio, women ...were primarily seen as consumers:...
Singing was sometimes an acceptable pastime for a girl, but playing an instrument, writing songs, or producing records simply wasn't done." Young women "...were not socialized to see themselves as people who create [music]." Women are also underrepresented in orchestral conducting, music criticism/music journalism, music producing, and sound engineering. While women were discouraged from composing in the 19th century, and there are few women musicologists, women became involved in music education "...to such a degree that women dominated [this field] during the later half of the 19th century and well into the 20th century." According to Jessica Duchen, a music writer for London's The Independent, women musicians in classical music are "...too often judged for their appearances, rather than their talent" and they face pressure "...to look sexy onstage and in photos." Duchen states that while "[t]here are women musicians who refuse to play on their looks,...the ones who do tend to be more materially successful." According to the UK's Radio 3 editor, Edwina Wolstencroft, the music industry has long been open to having women in performance or entertainment roles, but women are much less likely to have positions of authority, such as being the conductor of an orchestra. In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process. One of the most recorded artists is Asha Bhosle, an Indian singer best known as a playback singer in Hindi cinema.

Media and technology

The music that composers and songwriters make can be heard through several media; the most traditional way is to hear it live, in the presence of the musicians (or as one of the musicians), in an outdoor or indoor space such as an amphitheatre, concert hall, cabaret room, theatre, pub, or coffeehouse.
Since the 20th century, live music can also be broadcast over the radio, television or the Internet, or recorded and listened to on a CD player or MP3 player. Some musical styles focus on producing songs and pieces for a live performance, while others focus on producing a recording that mixes together sounds that were never played "live." Even in essentially live styles such as rock, recording engineers often use the ability to edit, splice and mix to produce recordings that may be considered "better" than the actual live performance. For example, some singers record themselves singing a melody and then record multiple harmony parts using overdubbing, creating a sound that would be impossible to achieve live. Technology has had an influence on music since prehistoric times, when cave people used simple tools to bore holes into bone flutes 41,000 years ago. Technology continued to influence music throughout its history, as it enabled new instruments and music notation reproduction systems to be used, with one of the watershed moments in music notation being the invention of the printing press in the 1400s, which meant music scores no longer had to be hand copied. In the 19th century, music technology led to the development of a more powerful, louder piano and of new valved brass instruments. In the late 1920s, as talking pictures with their prerecorded musical tracks emerged, an increasing number of movie-house orchestra musicians found themselves out of work. During the 1920s, live musical performances by orchestras, pianists, and theater organists had been common at first-run theaters. With the coming of talking motion pictures, those featured performances were largely eliminated. The American Federation of Musicians (AFM) took out newspaper advertisements protesting the replacement of live musicians with mechanical playing devices.
One 1929 ad that appeared in the Pittsburgh Press features an image of a can labeled "Canned Music / Big Noise Brand / Guaranteed to Produce No Intellectual or Emotional Reaction Whatever". Since the introduction of legislation to help protect performers, composers, publishers and producers, including the Audio Home Recording Act of 1992 in the United States and the 1979 revised Berne Convention for the Protection of Literary and Artistic Works in the United Kingdom, recordings and live performances have also become more accessible through computers, devices and the Internet in a form that is commonly known as Music-On-Demand. In many cultures, there is less distinction between performing and listening to music, since virtually everyone is involved in some sort of musical activity, often in a communal setting. In industrialized countries, listening to music through a recorded form, such as sound recording on record or radio, became more common than experiencing live performance, roughly in the middle of the 20th century. By the 1980s, watching music videos was a popular way to listen to music, while also seeing the performers. Sometimes, live performances incorporate prerecorded sounds. For example, a disc jockey uses disc records for scratching, and some 20th-century works have a solo for an instrument or voice that is performed along with music that is prerecorded onto a tape. Some pop bands use recorded backing tracks. Computers and many keyboards can be programmed to produce and play Musical Instrument Digital Interface (MIDI) music. Audiences can also become performers by participating in karaoke, an activity of Japanese origin centered on a device that plays voice-eliminated versions of well-known songs. Most karaoke machines also have video screens that show lyrics to songs being performed; performers can follow the lyrics as they sing over the instrumental tracks.
Internet The advent of the Internet and widespread high-speed broadband access has transformed the experience of music, partly through the increased ease of access to recordings of music via streaming video and the vastly increased choice of music for consumers. Chris Anderson, in his book The Long Tail: Why the Future of Business Is Selling Less of More, suggests that while the traditional economic model of supply and demand describes scarcity, the Internet retail model is based on abundance. Digital storage costs are low, so a company can afford to make its whole recording inventory available online, giving customers as much choice as possible. It has thus become economically viable to offer music recordings that very few people are interested in. Consumers' growing awareness of their increased choice results in a closer association between listening tastes and social identity, and the creation of thousands of niche markets. Another effect of the Internet arose with online communities and social media websites like YouTube and Facebook, a social networking service. These sites make it easier for aspiring singers and amateur bands to distribute videos of their songs, connect with other musicians, and gain audience interest. Professional musicians also use YouTube as a free publisher of promotional material. YouTube users, for example, no longer only download and listen to MP3s, but also actively create their own. According to Don Tapscott and Anthony D. Williams, in their book Wikinomics, there has been a shift from a traditional consumer role to what they call a "prosumer" role, a consumer who both creates and consumes content. Manifestations of this in music include the production of mash-ups, remixes, and music videos by fans. Business The music industry refers to the businesses connected with the creation and sale of music.
It consists of songwriters and composers who create new songs and musical pieces, music producers and sound engineers who record songs and pieces, and record labels and publishers that distribute recorded music products and sheet music internationally and that often control the rights to those products. Some music labels are "independent," while others are subsidiaries of larger corporate entities or international media groups. In the 2000s, the increasing popularity of listening to music as digital music files on MP3 players, iPods, or computers, and of trading music on file sharing websites or buying it online in the form of digital files, had a major impact on the traditional music business. Many smaller independent CD stores went out of business as music buyers decreased their purchases of CDs, and many labels had lower CD sales. Some companies did well with the change to a digital format, though, such as Apple's iTunes, an online music store that sells digital files of songs over the Internet. Intellectual property laws In spite of some international copyright treaties, determining which music is in the public domain is complicated by the variety of national copyright laws that may be applicable. US copyright law formerly protected printed music published after 1923 for 28 years, with renewal for another 28 years, but the Copyright Act of 1976 made renewal automatic, and the 1998 Copyright Term Extension Act changed the calculation of the copyright term to 70 years after the death of the creator. Recorded sound falls under mechanical licensing, often covered by a confusing patchwork of state laws; most cover versions are licensed through the Harry Fox Agency. Performance rights may be obtained by either performers or the performance venue; the two major organizations for licensing are BMI and ASCAP. Two online sources for public domain music are IMSLP (International Music Score Library Project) and the Choral Public Domain Library (CPDL).
Education Non-professional The incorporation of some music or singing training into general education from preschool to post-secondary education is common in North America and Europe. Involvement in playing and singing music is thought to teach basic skills such as concentration, counting, listening, and cooperation, while also promoting understanding of language, improving the ability to recall information, and creating an environment more conducive to learning in other areas. In elementary schools, children often learn to play instruments such as the recorder, sing in small choirs, and learn about the history of Western art music and traditional music. Some elementary school children also learn about popular music styles. In religious schools, children sing hymns and other religious music. In secondary schools (and less commonly in elementary schools), students may have the opportunity to perform in some types of musical ensembles, such as choirs (a group of singers), marching bands, concert bands, jazz bands, or orchestras. In some school systems, music lessons on how to play instruments may be provided. Some students also take private music lessons after school with a singing teacher or instrument teacher. Amateur musicians typically learn basic musical rudiments (e.g., learning about musical notation for musical scales and rhythms) and beginner- to intermediate-level singing or instrument-playing techniques. At the university level, students in most arts and humanities programs can receive credit for taking a few music courses, which typically take the form of an overview course on the history of music, or a music appreciation course that focuses on listening to music and learning about different musical styles. In addition, most North American and European universities have some types of musical ensembles that students in arts and humanities are able to participate in, such as choirs, marching bands, concert bands, or orchestras.
The study of Western art music is increasingly common outside of North America and Europe, such as at the Indonesian Institute of the Arts in Yogyakarta, Indonesia, or in the classical music programs that are available in Asian countries such as South Korea, Japan, and China. At the same time, Western universities and colleges are widening their curricula to include music of non-Western cultures, such as the music of Africa or Bali (e.g. gamelan music). Professional People aiming to become professional musicians, singers, composers, songwriters, music teachers and practitioners of other music-related professions such as music history professors, sound engineers, and so on study in specialized post-secondary programs offered by colleges, universities and music conservatories. Some institutions that train individuals for careers in music offer training in a wide range of professions, as is the case with many of the top U.S. universities, which offer degrees in music performance (including singing and playing instruments), music history, music theory, music composition, music education (for individuals aiming to become elementary or high school music teachers) and, in some cases, conducting. On the other hand, some small colleges may only offer training in a single profession (e.g., sound recording). While most university and conservatory music programs focus on training students in classical music, there are a number of universities and colleges that train musicians for careers as jazz or popular music musicians and composers, with notable U.S. examples including the Manhattan School of Music and the Berklee College of Music. Two important schools in Canada that offer professional jazz training are McGill University and Humber College. Individuals aiming at careers in some types of music, such as heavy metal music, country music or blues, are less likely to become professionals by completing degrees or diplomas in colleges or universities.
Instead, they typically learn about their style of music by singing or playing in many bands (often beginning in amateur bands, cover bands and tribute bands), studying recordings available on CD, DVD and the Internet, and working with already-established professionals in their style of music, either through informal mentoring or regular music lessons. Since the 2000s, the increasing popularity and availability of Internet forums and YouTube "how-to" videos have enabled many singers and musicians from metal, blues and similar genres to improve their skills. Many pop, rock and country singers train informally with vocal coaches and singing teachers. Undergraduate Undergraduate university degrees in music, including the Bachelor of Music, the Bachelor of Music Education, and the Bachelor of Arts (with a major in music), typically take about four years to complete. These degrees provide students with a grounding in music theory and music history, and many students also study an instrument or learn singing technique as part of their program. Graduates of undergraduate music programs can seek employment or go on to further study in music graduate programs. Bachelor's degree graduates are also eligible to apply to some graduate programs and professional schools outside of music (e.g., public administration, business administration, library science, and, in some jurisdictions, teacher's college, law school or medical school). Graduate Graduate music degrees include the Master of Music, the Master of Arts (in musicology, music theory or another music field), the Doctor of Philosophy (Ph.D.) (e.g., in musicology or music theory), and more recently, the Doctor of Musical Arts, or DMA. The Master of Music degree, which takes one to two years to complete, is typically awarded to students studying the performance of an instrument, education, voice (singing) or composition.
The Master of Arts degree, which takes one to two years to complete and often requires a thesis, is typically awarded to students studying musicology, music history, music theory or ethnomusicology. The PhD, which is required for students who want to work as university professors in musicology, music history, or music theory, takes three to five years of study after the master's degree, during which time the student will complete advanced courses and undertake research for a dissertation. The DMA is a relatively new degree that was created to provide a credential for professional performers or composers who want to work as university professors in musical performance or composition. The DMA takes three to five years after a master's degree, and includes advanced courses, projects, and performances. In Medieval times, the study of music was one of the four subjects of the Quadrivium among the seven Liberal Arts and was considered vital to higher learning. Within the quantitative Quadrivium, music, or more accurately harmonics, was the study of rational proportions. Musicology Musicology, the academic study of the subject of music, is studied in universities and music conservatories. The earliest definitions from the 19th century defined three sub-disciplines of musicology: systematic musicology, historical musicology, and comparative musicology or ethnomusicology. In 2010-era scholarship, one is more likely to encounter a division of the discipline into music theory, music history, and ethnomusicology. Research in musicology has often been enriched by cross-disciplinary work, for example in the field of psychoacoustics. The study of the music of non-Western cultures, and the cultural study of music, is called ethnomusicology. Students can pursue the undergraduate study of musicology, ethnomusicology, music history, and music theory through several different types of degrees, including bachelor's degrees, master's degrees and PhD degrees.
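The "rational proportions" that Quadrivium harmonics studied can be made concrete with exact fractions. A minimal illustration in Python (the constant names are illustrative; the 2:1, 3:2 and 4:3 ratios are the classical Pythagorean consonances):

```python
from fractions import Fraction

# Sketch: intervals as string-length ratios, as in Quadrivium harmonics.
# Classical consonances: octave 2:1, perfect fifth 3:2, perfect fourth 4:3.
OCTAVE = Fraction(2, 1)
FIFTH = Fraction(3, 2)
FOURTH = Fraction(4, 3)

# Stacking intervals multiplies their ratios: a fifth plus a fourth
# spans an octave exactly.
assert FIFTH * FOURTH == OCTAVE

# Twelve stacked fifths overshoot seven octaves by the Pythagorean comma,
# the small discrepancy that later tuning systems had to distribute.
comma = FIFTH**12 / OCTAVE**7
print(comma)         # 531441/524288
print(float(comma))  # ~1.01364
```

Working in exact fractions rather than floating point mirrors the medieval treatment of harmonics as a branch of arithmetic.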
Music theory Music theory is the study of music, generally in a highly technical manner outside of other disciplines. More broadly, it refers to any study of music, usually related in some form with compositional concerns, and may include mathematics, physics, and anthropology. What is most commonly taught in beginning music theory classes are guidelines to write in the style of the common practice period, or tonal music. Theory, even of music of the common practice period, may take many other forms. Musical set theory is the application of mathematical set theory to music, first applied to atonal music. Speculative music theory, contrasted with analytic music theory, is devoted to the analysis and synthesis of music materials, for example tuning systems, generally as preparation for composition. Zoomusicology Zoomusicology is the study of the music of non-human animals, or the musical aspects of sounds produced by non-human animals. As George Herzog (1941) asked, "do animals have music?" François-Bernard Mâche's Musique, mythe, nature, ou les Dauphins d'Arion (1983), a study of "ornitho-musicology" using the paradigmatic segmentation analysis technique of Nicolas Ruwet's Langage, musique, poésie (1972), shows that bird songs are organised according to a repetition-transformation principle. Jean-Jacques Nattiez (1990) argues that, in the last analysis, it is a human being who decides what is and is not musical, even when the sound is not of human origin. Expression Musical expression is achieved through the manipulation of pitch (such as inflection, vibrato, slides etc.), volume (dynamics, accent, tremolo etc.), duration (tempo fluctuations, rhythmic changes, changing note duration such as with legato and staccato, etc.), timbre (e.g. changing vocal timbre from a light to a resonant voice) and sometimes even texture (e.g. doubling the bass note for a richer effect in a piano piece). Expression therefore can be seen as a manipulation of all elements in order to convey "an indication of mood, spirit, character etc."
and as such cannot be included as a unique perceptual element of music, although it can be considered an important rudimentary element. Form In music, form describes the overall structure or plan of a song or piece of music, and it describes the layout of a composition as divided into sections. In the early 20th century, Tin Pan Alley songs and Broadway musical songs were often in AABA 32-bar form, in which the A sections repeated the same eight-bar melody (with variation) and the B section provided a contrasting melody or harmony for eight bars. From the 1960s onward, Western pop and rock songs are often in verse-chorus form, which comprises a sequence of verse and chorus ("refrain") sections, with new lyrics for most verses and repeating lyrics for the choruses. Popular music often makes use of strophic form, sometimes in conjunction with the twelve-bar blues. In the tenth edition of The Oxford Companion to Music, Percy Scholes defines musical form as "a series of strategies designed to find a successful mean between the opposite extremes of unrelieved repetition and unrelieved alteration." Examples of common forms of Western music include the fugue, the invention, sonata-allegro, canon, strophic, theme and variations, and rondo. Scholes states that European classical music had only six stand-alone forms: simple binary, simple ternary, compound binary, rondo, air with variations, and fugue (although musicologist Alfred Mann emphasized that the fugue is primarily a method of composition that has sometimes taken on certain structural conventions). Where a piece cannot readily be broken down into sectional units (though it might borrow some form from a poem, story or programme), it is said to be through-composed. Such is often the case with a fantasia, prelude, rhapsody, etude (or study), symphonic poem, bagatelle, impromptu, etc. Professor Charles Keil classified forms and formal detail as "sectional, developmental, or variational."
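Sectional forms like the AABA scheme described above lend themselves to a simple model. A hypothetical sketch in Python (the uniform eight-bar sections and both helper names are illustrative assumptions, not part of any formal theory):

```python
# Sketch: representing song forms as sequences of lettered sections.
# Section length is an assumption (8 bars each, as in the AABA example).

def bar_count(form: str, bars_per_section: int = 8) -> int:
    """Total bars in a form such as 'AABA' with uniform section lengths."""
    return len(form) * bars_per_section

def section_layout(form: str) -> dict[str, int]:
    """Count how often each section letter appears in the form."""
    counts: dict[str, int] = {}
    for section in form:
        counts[section] = counts.get(section, 0) + 1
    return counts

print(bar_count("AABA"))       # 32 -- the classic 32-bar form
print(section_layout("AABA"))  # {'A': 3, 'B': 1}
print(bar_count("ABABCB"))     # a verse-chorus-bridge sketch, 48 bars
```

The same representation covers strophic form (for example "AAAA") as well as verse-chorus layouts, which is why analysts label sections with letters in the first place.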
Analysis of styles Some styles of music place an emphasis on certain of these fundamentals, while others place less emphasis on certain elements. To give one example, while Bebop-era jazz makes use of very complex chords, including altered dominants and challenging chord progressions, with chords changing two or more times per bar and keys changing several times in a tune, funk places most of its emphasis on rhythm and groove, with entire songs based on a vamp on a single chord. While Romantic era classical music from the mid- to late-1800s makes great use of dramatic changes of dynamics, from whispering pianissimo sections to thunderous fortissimo sections, some entire Baroque dance suites for harpsichord from the early 1700s may use a single dynamic. To give another example, while some art music pieces, such as symphonies, are very long, some pop songs are just a few minutes long. History Prehistory Prehistoric music can only be theorized based on findings from paleolithic archaeology sites. Flutes are often discovered, carved from bones in which lateral holes have been pierced; these are thought to have been blown at one end like the Japanese shakuhachi. The Divje Babe flute, carved from a cave bear femur, is thought to be at least 40,000 years old, though there is considerable debate surrounding whether it is truly a musical instrument or an object formed by animals. Instruments such as the seven-holed flute and various types of stringed instruments, such as the Ravanahatha, have been recovered from Indus Valley Civilization archaeological sites. India has one of the oldest musical traditions in the world; references to Indian classical music (marga) are found in the Vedas, ancient scriptures of the Hindu tradition. The earliest and largest collection of prehistoric musical instruments was found in China and dates back to between 7000 and 6600 BC.
The "Hurrian Hymn to Nikkal", found on clay tablets that date back to approximately 1400 BC, is the oldest surviving notated work of music. Ancient Egypt The earliest material and representational evidence of Egyptian musical instruments dates to the Predynastic period, but the evidence is more securely attested in the Old Kingdom, when harps, flutes and double clarinets were played. Percussion instruments, lyres and lutes were added to orchestras by the Middle Kingdom. Cymbals frequently accompanied music and dance, much as they still do in Egypt today. Egyptian folk music, including the traditional Sufi dhikr rituals, is the closest contemporary music genre to ancient Egyptian music, having preserved many of its features, rhythms and instruments. Asian cultures Asian music covers a vast swath of music cultures surveyed in the articles on Arabia, Central Asia, East Asia, South Asia, and Southeast Asia. Several have traditions reaching into antiquity. Indian classical music is one of the oldest musical traditions in the world. The Indus Valley civilization has sculptures that show dance and old musical instruments, like the seven-holed flute. Various types of stringed instruments and drums have been recovered from Harappa and Mohenjo Daro by excavations carried out by Sir Mortimer Wheeler. The Rigveda has elements of present Indian music, with a musical notation to denote the metre and the mode of chanting. Indian classical music (marga) is monophonic, and based on a single melody line or raga rhythmically organized through talas. Silappadhikaram by Ilango Adigal provides information about how new scales can be formed by modal shifting of the tonic from an existing scale. Present-day Hindi music was influenced by Persian traditional music and the Afghan Mughals. Carnatic music, popular in the southern states, is largely devotional; the majority of the songs are addressed to the Hindu deities. There are also many songs emphasising love and other social issues.
Indonesian music took shape as Bronze Age cultures migrated to the Indonesian archipelago in the 2nd to 3rd centuries BC. Indonesian traditional music often uses percussion instruments, especially kendang and gongs. Some regions developed elaborate and distinctive musical instruments, such as the sasando stringed instrument on the island of Rote, the Sundanese angklung, and the complex and sophisticated Javanese and Balinese gamelan orchestras. Indonesia is the home of the gong chime, a general term for a set of small, high-pitched pot gongs. Gongs are usually placed in order of note, with the boss up, on a string held in a low wooden frame. The most popular and famous form of Indonesian music is probably gamelan, an ensemble of tuned percussion instruments that include metallophones, drums, gongs and spike fiddles along with bamboo suling. Chinese classical music, the traditional art or court music of China, has a history stretching over around three thousand years. It has its own unique systems of musical notation, as well as musical tuning and pitch, musical instruments and styles or musical genres. Chinese music is pentatonic-diatonic, having a scale of twelve notes to an octave (5 + 7 = 12), as does European-influenced music. Ancient Greece Music was an important part of social and cultural life in ancient Greece; in fact, it was one of the main subjects taught to children. Musical education was considered to be important for the development of an individual's soul. Musicians and singers played a prominent role in Greek theater, and those who received a musical education were seen as nobles and in perfect harmony (as can be read in Plato's Republic). Mixed-gender choruses performed for entertainment, celebration, and spiritual ceremonies. Ancient Greek sacred music was regarded as an example of perfection and purity.
Instruments included the double-reed aulos and a plucked string instrument, the lyre, principally the special kind called a kithara. Music was an important part of education, and boys were taught music starting at age six. Greek musical literacy created a flowering of music development. Greek music theory included the Greek musical modes, which eventually became the basis for Western religious and classical music. Later, influences from the Roman Empire, Eastern Europe, and the Byzantine Empire changed Greek music. The Seikilos epitaph is the oldest surviving example of a complete musical composition, including musical notation, from anywhere in the world. The oldest surviving work written on the subject of music theory is Harmonika Stoicheia by Aristoxenus. Western classical Middle Ages The medieval era (476 to 1400) started with the introduction of monophonic (single melodic line) chanting into Roman Catholic Church services. Musical notation had been used in ancient Greek culture, but in the Middle Ages notation was reintroduced by the Catholic Church so that chant melodies could be written down, to facilitate the use of the same melodies for religious music across the entire Catholic empire. The only European medieval repertory that survives in written form from before 800 is the monophonic liturgical plainsong chant of the Roman Catholic Church, the central tradition of which was called Gregorian chant. Alongside these traditions of sacred and church music there existed a vibrant tradition of secular song (non-religious songs). Examples of composers from this period are Léonin, Pérotin, Guillaume de Machaut, and Walther von der Vogelweide. Renaissance Renaissance music (c. 1400 to 1600) was more focused on secular (non-religious) themes, such as courtly love.
Around 1450, the printing press was invented, which made printed sheet music much less expensive and easier to mass-produce (prior to the invention of the printing press, all notated music was hand-copied). The increased availability of sheet music helped to spread musical styles more quickly and across a larger area. Musicians and singers often worked for the church, courts and towns. Church choirs grew in size, and the church remained an important patron of music. By the middle of the 15th century, composers wrote richly polyphonic sacred music, in which different melody lines were interwoven simultaneously. Prominent composers from this era include Guillaume Dufay, Giovanni Pierluigi da Palestrina, Thomas Morley, and Orlande de Lassus. As musical activity shifted from the church to the aristocratic courts, kings, queens and princes competed for the finest composers. Many leading composers came from the Netherlands, Belgium, and northern France; they are called the Franco-Flemish composers. They held important positions throughout Europe, especially in Italy. Other countries with vibrant musical activity included Germany, England, and Spain. Baroque The Baroque era of music took place from 1600 to 1750, as the Baroque artistic style flourished across Europe, and during this time music expanded in its range and complexity. Baroque music began when the first operas (dramatic solo vocal music accompanied by orchestra) were written. During the Baroque era, polyphonic contrapuntal music, in which multiple, simultaneous independent melody lines were used, remained important (counterpoint was important in the vocal music of the Medieval era). German Baroque composers wrote for small ensembles including strings, brass, and woodwinds, as well as for choirs and keyboard instruments such as pipe organ, harpsichord, and clavichord.
During this period several major music forms were defined that lasted into later periods, when they were expanded and evolved further, including the fugue, the invention, the sonata, and the concerto. The late Baroque style was polyphonically complex and richly ornamented. Important composers from the Baroque era include Johann Sebastian Bach (Cello Suites), George Frideric Handel (Messiah), Georg Philipp Telemann and Antonio Lucio Vivaldi (The Four Seasons). Classicism The music of the Classical period (1730 to 1820) aimed to imitate what were seen as the key elements of the art and philosophy of Ancient Greece and Rome: the ideals of balance, proportion and disciplined expression. (Note: the music from the Classical period should not be confused with Classical music in general, a term which refers to Western art music from the 5th century to the 2000s, and which includes the Classical period as one of a number of periods). Music from the Classical period has a lighter, clearer and considerably simpler texture than the Baroque music which preceded it. The main style was homophony, where a prominent melody and a subordinate chordal accompaniment part are clearly distinct. Classical instrumental melodies tended to be almost voicelike and singable. New genres were developed, and the fortepiano, the forerunner to the modern piano, replaced the Baroque era harpsichord and pipe organ as the main keyboard instrument (though pipe organ continued to be used in sacred music, such as Masses). Importance was given to instrumental music, which was dominated by further development of musical forms initially defined in the Baroque period: the sonata, the concerto, and the symphony. Other main kinds were the trio, string quartet, serenade and divertimento. The sonata was the most important and developed form. Although Baroque composers also wrote sonatas, the Classical style of sonata is completely distinct.
All of the main instrumental forms of the Classical era, from string quartets to symphonies and concertos, were based on the structure of the sonata. The instruments used in chamber music and orchestras became more standardized. In place of the basso continuo group of the Baroque era, which consisted of harpsichord, organ or lute along with a number of bass instruments selected at the discretion of the group leader (e.g., viol, cello, theorbo, serpent), Classical chamber groups used specified, standardized instruments (e.g., a string quartet would be performed by two violins, a viola and a cello). The Baroque era's improvised chord-playing of the continuo keyboardist or lute player was gradually phased out between 1750 and 1800. One of the most important changes made in the Classical period was the development of public concerts. The aristocracy still played a significant role in the sponsorship of concerts and compositions, but it was now possible for composers to survive without being permanent employees of queens or princes. The increasing popularity of classical music led to a growth in the number and types of orchestras. The expansion of orchestral concerts necessitated the building of large public performance spaces. Symphonic music, including symphonies, musical accompaniment to ballet and mixed vocal/instrumental genres such as opera and oratorio, became more popular. The best known composers of Classicism are Carl Philipp Emanuel Bach, Christoph Willibald Gluck, Johann Christian Bach, Joseph Haydn, Wolfgang Amadeus Mozart, Ludwig van Beethoven and Franz Schubert. Beethoven and Schubert are also considered to be composers in the later part of the Classical era, as it began to move towards Romanticism. Romanticism Romantic music (c. 1810 to 1900) had many elements in common with the Romantic styles in literature and painting of the era.
Romanticism was an artistic, literary, and intellectual movement characterized by its emphasis on emotion and individualism as well as glorification of the past and of nature. Romantic music expanded beyond the rigid styles and forms of the Classical era into more passionate, dramatic expressive pieces and songs. Romantic composers such as Wagner and Brahms attempted to increase emotional expression and power in their music to describe deeper truths or human feelings. With symphonic tone poems, composers tried to tell stories and evoke images or landscapes using instrumental music. Some composers promoted nationalistic pride with patriotic orchestral music inspired by folk music. The emotional and expressive qualities of music came to take precedence over tradition. Romantic composers grew in idiosyncrasy, and went further in the syncretism of exploring different art forms in a musical context (such as literature), history (historical figures and legends), or nature itself. Romantic love or longing was a prevalent theme in many works composed during this period. In some cases, the formal structures from the Classical period continued to be used (e.g., the sonata form used in string quartets and symphonies), but these forms were expanded and altered. In many cases, new approaches were explored for existing genres, forms, and functions. Also, new forms were created that were deemed better suited to the new subject matter. Composers continued to develop opera and ballet music, exploring new styles and themes. In the years after 1800, the music developed by Ludwig van Beethoven and Franz Schubert introduced a more dramatic, expressive style. In Beethoven's case, short motifs, developed organically, came to replace melody as the most significant compositional unit (an example is the distinctive four-note figure used in his Fifth Symphony).
Later Romantic composers such as Pyotr Ilyich Tchaikovsky, Antonín Dvořák, and Gustav Mahler used more unusual chords and more dissonance to create dramatic tension. They generated complex and often much longer musical works. During the late Romantic period, composers explored dramatic chromatic alterations of tonality, such as extended chords and altered chords, which created new sound "colours". The late 19th century saw a dramatic expansion in the size of the orchestra, and the industrial revolution helped to create better instruments, creating a more powerful sound. Public concerts became an important part of well-to-do urban society. The era also saw a new diversity in theatre music, including operetta, musical comedy and other forms of musical theatre. 20th and 21st century In the 19th century, one of the key ways that new compositions became known to the public was by the sale of sheet music, which middle-class amateur music lovers would perform at home on their piano or other common instruments, such as the violin. With 20th-century music, the invention of new electric technologies such as radio broadcasting and the mass-market availability of gramophone records meant that sound recordings of songs and pieces heard by listeners (either on the radio or on their record player) became the main way to learn about new songs and pieces. There was a vast increase in music listening as the radio gained popularity and phonographs were used to replay and distribute music. Whereas in the 19th century the focus on sheet music restricted access to new music to the middle- and upper-class people who could read music and who owned pianos and instruments, in the 20th century anyone with a radio or record player could hear operas, symphonies and big bands right in their own living room. This allowed lower-income people, who would never have been able to afford an opera or symphony concert ticket, to hear this music.
It also meant that people could hear music from different parts of the country, or even different parts of the world, even if they could not afford to travel to these locations. This helped to spread musical styles. Art music in the 20th century was characterized by the exploration of new rhythms, styles, and sounds. The horrors of World War I influenced many of the arts, including music, and some composers began exploring darker, harsher sounds. Traditional music styles such as jazz and folk music were used by composers as a source of ideas for classical music. Igor Stravinsky, Arnold Schoenberg, and John Cage were all influential composers in 20th-century art music. The invention of sound recording and the ability to edit music gave rise to new subgenres of classical music, including the acousmatic and musique concrète schools of electronic composition. Sound recording was also a major influence on the development of popular music genres, because it enabled recordings of songs and bands to be widely distributed. The introduction of the multitrack recording system had a major influence on rock music, because it could do much more than record a band's performance. Using a multitrack system, a band and their music producer could overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance. Jazz evolved and became an important genre of music over the course of the 20th century, and during the second half of that century, rock music did the same. Jazz is an American musical artform that originated in the beginning of the 20th century in African American communities in the Southern United States from a confluence of African and European music traditions. The style's West African pedigree is evident in its use of blue notes, improvisation, polyrhythms, syncopation, and the swung note.
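The multitrack overdubbing described above amounts, at the digital level, to sample-wise addition of aligned audio tracks, each with its own gain. A minimal sketch in Python (the track data, gain values, and function name are hypothetical illustrations, not any DAW's actual API; real mixers also apply effects, panning, and automation):

```python
# Mix several mono tracks (lists of float samples in [-1.0, 1.0])
# into one track by summing sample-wise, with a per-track gain.

def mix_tracks(tracks, gains=None):
    """Sum aligned sample lists; shorter tracks are padded with silence."""
    if gains is None:
        gains = [1.0] * len(tracks)
    length = max(len(t) for t in tracks)
    mixed = [0.0] * length
    for track, gain in zip(tracks, gains):
        for i, sample in enumerate(track):
            mixed[i] += gain * sample
    # Naive peak normalization so the summed signal stays in [-1.0, 1.0]
    peak = max(abs(s) for s in mixed)
    if peak > 1.0:
        mixed = [s / peak for s in mixed]
    return mixed

# Two hypothetical overdubbed layers of different lengths
guitar = [0.5, -0.5, 0.5, -0.5]
vocals = [0.8, 0.8, -0.8]
mix = mix_tracks([guitar, vocals], gains=[1.0, 0.5])
```

The design choice worth noting is the normalization step: summing many layers can exceed the representable amplitude range, which is why real mixing consoles and DAWs provide gain staging and limiters.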
Rock music is a genre of popular music that developed in the 1960s from 1950s rock and roll, rockabilly, blues, and country music. The sound of rock often revolves around the electric guitar or acoustic guitar, and it uses a strong back beat laid down by a rhythm section. Along with the guitar or keyboards, saxophone and blues-style harmonica are used as soloing instruments. In its "purest form", it "has three chords, a strong, insistent back beat, and a catchy melody". The traditional rhythm section for popular music is rhythm guitar, electric bass guitar, and drums. Some bands also have keyboard instruments such as organ, piano, or, since the 1970s, analog synthesizers. In the 1980s, pop musicians began using digital synthesizers such as the DX-7, electronic drum machines such as the TR-808, and synth bass devices (such as the TB-303) or synth bass keyboards. In the 1990s, an increasingly large range of computerized hardware musical devices, instruments, and software (e.g., digital audio workstations) came into use. In the 2020s, soft synths and computer music apps make it possible for bedroom producers to create and record some types of music, such as electronic dance music, in their own home, adding sampled and digital instruments and editing the recording digitally. In the 1990s, some bands in genres such as nu metal began including DJs in their bands. DJs create music by manipulating recorded music on record players or CD players, using a DJ mixer. Innovation in music technology continued into the 21st century, including the development of isomorphic keyboards and Dynamic Tonality. Performance Performance is the physical expression of music, which occurs when a song is sung or when a piano piece, electric guitar melody, symphony, drum beat or other musical part is played by musicians.
In classical music, a musical work is written in music notation by a composer and then performed once the composer is satisfied with its structure and instrumentation. However, as it gets performed, the interpretation of a song or piece can evolve and change. In classical music, instrumental performers, singers or conductors may gradually make changes to the phrasing or tempo of a piece. In popular and traditional music, the performers have much more freedom to make changes to the form of a song or piece. As such, in popular and traditional music styles, even when a band plays a cover song, it can make changes such as adding a guitar solo or inserting an introduction. A performance can either be planned out and rehearsed (practiced), which is the norm in classical music, jazz big bands, and many popular music styles, or improvised over a chord progression (a sequence of chords), which is the norm in small jazz and blues groups. Rehearsals of orchestras, concert bands and choirs are led by a conductor. Rock, blues and jazz bands are usually led by the bandleader. A rehearsal is a structured repetition of a song or piece by the performers until it can be sung or played correctly and, if it is a song or piece for more than one musician, until the parts are together from a rhythmic and tuning perspective. Improvisation is the creation of a musical idea, such as a melody or other musical line, on the spot, often based on scales or pre-existing melodic riffs. Many cultures have strong traditions of solo performance (in which one singer or instrumentalist performs), such as in Indian classical music, and in the Western art-music tradition. Other cultures, such as in Bali, include strong traditions of group performance.
All cultures include a mixture of both, and performance may range from improvised solo playing to highly planned and organised performances such as the modern classical concert, religious processions, classical music festivals or music competitions. Chamber music, which is music for a small ensemble with only a few of each type of instrument, is often seen as more intimate than large symphonic works. Oral and aural tradition Many types of music, such as traditional blues and folk music, were not written down in sheet music; instead, they were originally preserved in the memory of performers, and the songs were handed down orally, from one musician or singer to another, or aurally, in which a performer learns a song "by ear". When the composer of a song or piece is no longer known, this music is often classified as "traditional" or as a "folk song". Different musical traditions have different attitudes towards how and where to make changes to the original source material, ranging from quite strict traditions to those that demand improvisation or modification of the music. A culture's history and stories may also be passed on by ear through song. Ornamentation In music, an ornament consists of added notes that provide decoration to a melody, bassline or other musical part. The detail included explicitly in the music notation varies between genres and historical periods. In general, art music notation from the 17th through the 19th centuries required performers to have a great deal of contextual knowledge about performing styles. For example, in the 17th and 18th centuries, music notated for solo performers typically indicated a simple, unadorned melody. Performers were expected to know how to add stylistically appropriate ornaments, such as trills and turns, to add interest to the music. Different styles of music use different ornaments.
A Baroque flute player might add mordents, which are short notes played before the main melody note, either above or below it. A blues guitarist playing electric guitar might use string bending to add expression; a heavy metal guitar player might use hammer-ons and pull-offs. In the 19th century, art music for solo performers may give a general instruction such as to perform the music expressively, without describing in detail how the performer should do this. The performer was expected to know how to use tempo changes, accentuation, and pauses (among other devices) to obtain this "expressive" performance style. In the 20th century, art music notation often became more explicit and used a range of markings and annotations to indicate to performers how they should play or sing the piece. In popular music and traditional music styles, performers are expected to know what types of ornaments are stylistically appropriate for a given song or piece, and performers typically add them in an improvised fashion. One exception is note-for-note solos, in which some players precisely recreate a famous version of a solo, such as a guitar solo. Philosophy and aesthetics Philosophy of music is a subfield of philosophy. The philosophy of music is the study of fundamental questions regarding music. The philosophical study of music has many connections with philosophical questions in metaphysics and aesthetics. Some basic questions in the philosophy of music are: What is the definition of music? (What are the necessary and sufficient conditions for classifying something as music?) What is the relationship between music and mind? What does music history reveal to us about the world? What is the connection between music and emotions? What is meaning in relation to music? In ancient times, such as with the Ancient Greeks, the aesthetics of music explored the mathematical and cosmological dimensions of rhythmic and harmonic organization.
In the 18th century, focus shifted to the experience of hearing music, and thus to questions about its beauty and human enjoyment (plaisir and jouissance) of music. The origin of this philosophic shift is sometimes attributed to Alexander Gottlieb Baumgarten in the 18th century, followed by Immanuel Kant. Through their writing, the ancient term 'aesthetics', meaning sensory perception, received its present-day connotation. In the 2000s, philosophers have tended to emphasize issues besides beauty and enjoyment. For example, music's capacity to express emotion has been a central issue. In the 20th century, important contributions were made by Peter Kivy, Jerrold Levinson, Roger Scruton, and Stephen Davies. However, many musicians, music critics, and other non-philosophers have contributed to the aesthetics of music. In the 19th century, a significant debate arose between Eduard Hanslick, a music critic and musicologist, and composer Richard Wagner regarding whether music can express meaning. Harry Partch and some other musicologists, such as Kyle Gann, have studied and tried to popularize microtonal music and the usage of alternate musical scales. Many modern composers, such as La Monte Young, Rhys Chatham, and Glenn Branca, have also paid much attention to a tuning system called just intonation. It is often thought that music has the ability to affect our emotions, intellect, and psychology; it can assuage our loneliness or incite our passions. The philosopher Plato suggests in The Republic that music has a direct effect on the soul. Therefore, he proposes that in the ideal regime music would be closely regulated by the state (Book VII). In Ancient China, the philosopher Confucius believed that music and rituals or rites are interconnected and harmonious with nature; he stated that music was the harmonization of heaven and earth, while order was brought by the rites, making them extremely crucial functions in society.
There has been a strong tendency in the aesthetics of music to emphasize the paramount importance of compositional structure; however, other issues concerning the aesthetics of music include lyricism, harmony, hypnotism, emotiveness, temporal dynamics, resonance, playfulness, and color (see also musical development). Psychology Modern music psychology aims to explain and understand musical behavior and experience. Research in this field and its subfields is primarily empirical; knowledge tends to advance on the basis of interpretations of data collected by systematic observation of and interaction with human participants. In addition to its focus on fundamental perceptions and cognitive processes, music psychology is a field of research with practical relevance for many areas, including music performance, composition, education, criticism, and therapy, as well as investigations of human aptitude, skill, intelligence, creativity, and social behavior. Neuroscience Cognitive neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It is also increasingly concerned with the brain basis for musical aesthetics and musical emotion. The field is distinguished by its reliance on direct observations of the brain, using such techniques as functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), magnetoencephalography (MEG), electroencephalography (EEG), and positron emission tomography (PET). Cognitive musicology Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition. The use of computer models provides an exacting, interactive medium in which to formulate and test theories and has roots in artificial intelligence and cognitive science.
This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation are often included in research, such as neural networks and evolutionary programs. This field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated. Psychoacoustics Psychoacoustics is the scientific study of sound perception. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound (including speech and music). It can be further categorized as a branch of psychophysics. Evolutionary musicology Evolutionary musicology concerns the "origins of music, the question of animal song, selection pressures underlying music evolution", and "music evolution and human evolution". It seeks to understand music perception and activity in the context of evolutionary theory. Charles Darwin speculated that music may have held an adaptive advantage and functioned as a protolanguage, a view which has spawned several competing theories of music evolution. An alternate view sees music as a by-product of linguistic evolution; a type of "auditory cheesecake" that pleases the senses without providing any adaptive function. This view has been directly countered by numerous music researchers. Cultural effects An individual's culture or ethnicity plays a role in their music cognition, including their preferences, emotional reaction, and musical memory. Musical preferences are biased toward culturally familiar musical traditions beginning in infancy, and adults' classification of the emotion of a musical piece depends on both culturally specific and universal structural features. Additionally, individuals' musical memory abilities are greater for culturally familiar music than for culturally unfamiliar music. 
Sociological aspects Many ethnographic studies demonstrate that music is a participatory, community-based activity. Music is experienced by individuals in a range of social settings, ranging from being alone to attending a large concert, forming a music community, which cannot be understood as a function of individual will or accident; it includes both commercial and non-commercial participants with a shared set of common values. Musical performances take different forms in different cultures and socioeconomic milieus. In Europe and North America, there is often a divide between what types of music are viewed as "high culture" and "low culture." "High culture" types of music typically include Western art music such as Baroque, Classical, Romantic, and modern-era symphonies, concertos, and solo works, and are typically heard in formal concerts in concert halls and churches, with the audience sitting quietly in seats. Other types of music—including, but not limited to, jazz, blues, soul, and country—are often performed in bars, nightclubs, and theatres, where the audience may be able to drink, dance, and express themselves by cheering. Until the later 20th century, the division between "high" and "low" musical forms was widely accepted as a valid distinction that separated out better quality, more advanced "art music" from the popular styles of music heard in bars and dance halls. However, in the 1980s and 1990s, musicologists studying this perceived divide between "high" and "low" musical genres argued that this distinction is not based on the musical value or quality of the different types of music. Rather, they argued that this distinction was based largely on the socioeconomic standing or social class of the performers or audience of the different types of music. For example, whereas the audience for Classical symphony concerts typically have above-average incomes, the audience for a rap concert in an inner-city area may have below-average incomes.
Even though the performers, audience, or venue where non-"art" music is performed may have a lower socioeconomic status, the music that is performed, such as blues, rap, punk, funk, or ska, may be very complex and sophisticated. When composers introduce styles of music that break with convention, there can be strong resistance from academic music experts and popular culture. Late-period Beethoven string quartets, Stravinsky ballet scores, serialism, bebop-era jazz, hip hop, punk rock, and electronica have all been considered non-music by some critics when they were first introduced. Such themes are examined in the sociology of music. The sociological study of music, sometimes called sociomusicology, is often pursued in departments of sociology, media studies, or music, and is closely related to the field of ethnomusicology. Role of women Women have played a major role in music throughout history, as composers, songwriters, instrumental performers, singers, conductors, music scholars, music educators, music critics/music journalists and other musical professions. The history of women in music also encompasses music movements, events and genres related to women, women's issues and feminism. In the 2010s, while women comprise a significant proportion of popular music and classical music singers, and a significant proportion of songwriters (many of them being singer-songwriters), there are few women record producers, rock critics and rock instrumentalists. Although there have been a huge number of women composers in classical music, from the medieval period to the present day, women composers are significantly underrepresented in the commonly performed classical music repertoire, music history textbooks and music encyclopedias; for example, in the Concise Oxford History of Music, Clara Schumann is one of the few female composers who is mentioned. Women comprise a significant proportion of instrumental soloists in classical music and the percentage of women in orchestras is increasing.
A 2015 article on concerto soloists in major Canadian orchestras, however, indicated that 84% of the soloists with the Orchestre Symphonique de Montréal were men. In 2012, women still made up just 6% of the top-ranked Vienna Philharmonic orchestra. Women are less common as instrumental players in popular music genres such as rock and heavy metal, although there have been a number of notable female instrumentalists and all-female bands. Women are particularly underrepresented in extreme metal genres. In the 1960s pop-music scene, "[l]ike most aspects of the...music business, [in the 1960s,] songwriting was a male-dominated field. Though there were plenty of female singers on the radio, women ...were primarily seen as consumers:... Singing was sometimes an acceptable pastime for a girl, but playing an instrument, writing songs, or producing records simply wasn't done." Young women "...were not socialized to see themselves as people who create [music]." Women are also underrepresented in orchestral conducting, music criticism/music journalism, music producing, and sound engineering. While women were discouraged from composing in the 19th century, and there are few women musicologists, women became involved in music education "...to such a degree that women dominated [this field] during the latter half of the 19th century and well into the 20th century." According to Jessica Duchen, a music writer for London's The Independent, women musicians in classical music are "...too often judged for their appearances, rather than their talent" and they face pressure "...to look sexy onstage and in photos." Duchen states that while "[t]here are women musicians who refuse to play on their looks,...the ones who do tend to be more materially successful."
According to the UK's Radio 3 editor, Edwina Wolstencroft, the music industry has long been open to having women in performance or entertainment roles, but women are much less likely to hold positions of authority, such as being the conductor of an orchestra. In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process. One of the most recorded artists is Asha Bhosle, an Indian singer best known as a playback singer in Hindi cinema. Media and technology The music that composers and songwriters make can be heard through several media; the most traditional way is to hear it live, in the presence of the musicians (or as one of the musicians), in an outdoor or indoor space such as an amphitheatre, concert hall, cabaret room, theatre, pub, or coffeehouse. Since the 20th century, live music can also be broadcast over the radio, television or the Internet, or recorded and listened to on a CD player or MP3 player. Some musical styles focus on producing songs and pieces for a live performance, while others focus on producing a recording that mixes together sounds that were never played "live." Even in essentially live styles such as rock, recording engineers often use the ability to edit, splice and mix to produce recordings that may be considered "better" than the actual live performance. For example, some singers record themselves singing a melody and then record multiple harmony parts using overdubbing, creating a sound that would be impossible to do live. Technology has had an influence on music since prehistoric times, when people used simple tools to bore holes in bones to make flutes 41,000 years ago.
Technology continued to influence music throughout its history, as it enabled new instruments and new systems for reproducing music notation. One of the watershed moments in music notation was the invention of the printing press in the 1400s, which meant music scores no longer had to be hand-copied. In the 19th century, music technology led to the development of a more powerful, louder piano and of new valved brass instruments. In the late 1920s, as talking pictures with their prerecorded musical tracks emerged, an increasing number of moviehouse orchestra musicians found themselves out of work. During the 1920s, live musical performances by orchestras, pianists, and theater organists were common at first-run theaters. With the coming of talking motion pictures, those featured performances were largely eliminated. The American Federation of Musicians (AFM) took out newspaper advertisements protesting the replacement of live musicians with mechanical playing devices. One 1929 ad that appeared in the Pittsburgh Press features an image of a can labeled "Canned Music / Big Noise Brand / Guaranteed to Produce No Intellectual or Emotional Reaction Whatever". Since the introduction of legislation to protect performers, composers, publishers and producers, including the Audio Home Recording Act of 1992 in the United States and the 1979 revised Berne Convention for the Protection of Literary and Artistic Works in the United Kingdom, recordings and live performances have also become more accessible through computers, devices and the Internet in a form that is commonly known as Music-On-Demand. In many cultures, there is less distinction between performing and listening to music, since virtually everyone is involved in some sort of musical activity, often in a communal setting.
In industrialized countries, listening to music in recorded form, such as sound recordings on records or the radio, became more common than experiencing live performance, roughly from the middle of the 20th century. By the 1980s, watching music videos was a popular way to listen to music, while also seeing the performers. Sometimes, live performances incorporate prerecorded sounds. For example, a disc jockey uses disc records for scratching, and some 20th-century works have a solo for an instrument or voice that is performed along with music that is prerecorded onto a tape. Some pop bands use recorded backing tracks. Computers and many keyboards can be programmed to produce and play Musical Instrument Digital Interface (MIDI) music. Audiences can also become performers by participating in karaoke, an activity of Japanese origin centered on a device that plays voice-eliminated versions of well-known songs. Most karaoke machines also have video screens that show lyrics to songs being performed; performers can follow the lyrics as they sing over the instrumental tracks. Internet The advent of the Internet and widespread high-speed broadband access has transformed the experience of music, partly through the increased ease of access to recordings of music via streaming video and the vastly increased choice of music for consumers. Chris Anderson, in his book The Long Tail: Why the Future of Business Is Selling Less of More, suggests that while the traditional economic model of supply and demand describes scarcity, the Internet retail model is based on abundance. Digital storage costs are low, so a company can afford to make its whole recording inventory available online, giving customers as much choice as possible. It has thus become economically viable to offer music recordings that very few people are interested in.
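MIDI, mentioned above, represents pitches as note numbers (0 to 127) rather than as frequencies; under standard twelve-tone equal temperament with A4 (note 69) tuned to 440 Hz, a note number n corresponds to the frequency 440 × 2^((n − 69)/12). A small sketch (the helper name is ours for illustration, not part of the MIDI specification):

```python
# Convert a MIDI note number to its frequency in hertz, assuming
# 12-tone equal temperament and concert pitch A4 (note 69) = 440 Hz.

def midi_to_hz(note: int) -> float:
    return 440.0 * 2.0 ** ((note - 69) / 12)

# Middle C (MIDI note 60) comes out near 261.63 Hz; adding 12 to the
# note number moves up one octave, exactly doubling the frequency.
```

This is why MIDI data is compact and editable: a synthesizer or soft synth resolves each note number to a pitch at playback time, so the same MIDI file can be rendered with any instrument sound or even an alternate tuning.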
Mode (from Latin modus, meaning "manner, tune, measure, due measure, rhythm, melody") may refer to:
Language
- Grammatical mode or grammatical mood, a category of verbal inflections that expresses an attitude of mind
- Imperative mood
- Subjunctive mood
- Rhetorical modes, a category of discourse
- Narrative mode, the method, voice, and point of view used to convey a narrative
- Modes of persuasion, oratorical devices
- Mode (literature), the general category of a literary work, e.g. the pastoral mode
Music
- Mode (music), a system of musical tonality involving a type of scale coupled with a set of characteristic melodic behaviors
- Modus (medieval music)
- Gregorian mode, a system of modes used in Gregorian chant (as opposed to ancient Greek modes or Byzantine octoechos)
- "Mode", a song by PRhyme from the 2015 soundtrack Southpaw: Music from and Inspired by the Motion Picture
Mathematics
- Mode (statistics), the most common value among a group
- Modes of convergence, a property of a series
Science
- Normal mode, patterns of vibration in acoustics, electromagnetic theory, etc.
- Longitudinal mode
- Transverse mode
- Global mode
- Mode (electromagnetism)
- Hybrid mode, such as longitudinal-section mode
- Quasinormal mode, a type of energy dissipation of a perturbed object or field
- Starvation mode, a biological condition
Computation
- Mode (user interface), a distinct method of operation within a computer system, in which the same user input can produce different results depending on the state of the system
- A game mode, a
- mode, a software configuration where text input is processed outside of an application
- Immediate mode (computer graphics), a graphic library where commands produce direct rendering on the display
- Data types in some programming languages (e.g., EL/1)
- Block cipher mode of operation, in cryptography
- Modes (Unix), permissions given to users and groups to access files and folders on Unix hosts
- MODE (command), a DOS and Windows command line utility for the configuration of devices and the console
- Asynchronous transfer mode, a method of digital communication
Popular culture and business
- Mode Records, a record label
- Mode.com and Mode Media
- MODE Magazine, an out-of-print U.S. women's fashion magazine featuring plus-size clothing shot in a Vogue-like aesthetic
- Mode magazine, a fictional fashion magazine which is the setting for the ABC series Ugly Betty
- Fashion
- Explosive Mode, a 1998 album by San Quinn and Messy Marv
- Mode series, a quartet of novels by Piers Anthony
- The Devil's Mode, a collection of short stories by Anthony Burgess
- Edna Mode, a fictional character in Pixar's animated superhero film The Incredibles
Places
- Mode, Banmauk, a village in Burma
- Mode, Illinois, an unincorporated community in Shelby County, Illinois, United States
Other uses
- Amateur radio modes
- IL Mode, a former name of Bærum SK, a Norwegian association football club
- Mode of transport, a means of transportation
- A technocomplex of stone tools
- Mode of production, a
the architectural design of a burrow is a genetic trait. Types of animals known as mice The most common mice are murines, in the same clade as common rats. They are murids, along with gerbils and other close relatives.
order Dasyuromorphia
- marsupial mice, smaller species of Dasyuridae
order Rodentia
- suborder Castorimorpha
  - family Heteromyidae
    - Kangaroo mouse, genus Microdipodops
    - Pocket mouse, tribe Perognathinae
    - Spiny pocket mouse, genus Heteromys
- suborder Anomaluromorpha
  - family Anomaluridae
    - flying mouse
- suborder Myomorpha
  - family Cricetidae
    - Brush mouse, Peromyscus boylii
    - Florida mouse
    - Golden mouse
    - American harvest mouse, genus Reithrodontomys
  - family Muridae
    - typical mice, the genus Mus
    - Field mice, genus Apodemus
      - Wood mouse, Apodemus sylvaticus
      - Yellow-necked mouse, Apodemus flavicollis
    - Large Mindoro forest mouse
    - Big-eared hopping mouse
    - Luzon montane forest mouse
    - Forrest's mouse
    - Pebble-mound mouse
    - Bolam's mouse
    - Eurasian harvest mouse, genus Micromys
Emotions Researchers at the Max Planck Institute of Neurobiology have confirmed that mice have a range of facial expressions. They used machine vision to spot familiar human emotions like pleasure, disgust, nausea, pain, and fear. Diet In nature, mice are largely herbivores, consuming any kind of fruit or grain from plants. However, mice adapt well to urban areas and are known for eating almost all types of food scraps. In captivity, mice are commonly fed a commercial pelleted mouse diet. These diets are nutritionally complete, but mice still need a large variety of vegetables. Mice do not have a special appetite for cheese; they will only eat cheese for lack of better options. Human use As experimental animals Mice are common experimental animals in laboratory research in biology and psychology, primarily because they are mammals and because they share a high degree of homology with humans. They are the most commonly used mammalian model organism, more common than rats.
The mouse genome has been sequenced, and virtually all mouse genes have human homologs. The mouse has approximately 2.7 billion base pairs and 20 pairs of chromosomes. They can also be manipulated in ways that are illegal with humans, although animal rights activists often object. A knockout mouse is a genetically modified mouse that has had one or more of its genes made inoperable through a gene knockout. Reasons for common selection of mice are that they are small and inexpensive, have a widely varied diet, are easily maintained, and can reproduce quickly. Several generations of mice can be observed in a relatively short time. Mice are generally very docile if raised from birth and given sufficient human contact. However, certain strains have been known to be quite temperamental. As pets Many people buy mice as companion pets. They can be playful, loving and can grow used to being handled. Like pet rats, pet mice should not be left unsupervised outside as they have many natural predators, including (but not limited to) birds, snakes, lizards, cats, and dogs. Male mice tend to have a stronger odor than the females. However, mice are careful groomers and as pets they never need bathing. Well looked-after mice can make ideal pets. Some common mouse care products are: Cage – Usually a hamster or gerbil cage, but a variety of special mouse cages are now available. Most should have a secure door. Food – Special pelleted and seed-based food is available.
control lists on every file provide flexible information sharing, but complete privacy when needed. Multics has a number of standard mechanisms to allow engineers to analyze the performance of the system, as well as a number of adaptive performance optimization mechanisms. Novel ideas Multics implements a single-level store for data access, discarding the clear distinction between files (called segments in Multics) and process memory. The memory of a process consists solely of segments that were mapped into its address space. To read or write to them, the process simply uses normal central processing unit (CPU) instructions, and the operating system takes care of making sure that all the modifications were saved to disk. In POSIX terminology, it is as if every file were mmap()ed; however, in Multics there is no concept of process memory, separate from the memory used to hold mapped-in files, as Unix has. All memory in the system is part of some segment, which appears in the file system; this includes the temporary scratch memory of the process, its kernel stack, etc.
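The mmap() analogy above can be made concrete. Below is a minimal sketch (Python on a POSIX-style system; the file name and helper are illustrative, not Multics code) that treats an ordinary file the way Multics treated every segment: once mapped, writes are plain memory operations and the operating system persists the changes.

```python
import mmap

def rewrite_prefix(path: str, new_prefix: bytes) -> bytes:
    """Overwrite the first bytes of a file through a memory mapping,
    then return the file's contents read back through normal I/O."""
    with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as seg:
        # An ordinary slice assignment -- no write() call -- and the
        # OS takes care of saving the modification to disk.
        seg[0:len(new_prefix)] = new_prefix
        seg.flush()
    with open(path, "rb") as f:
        return f.read()
```

Unlike Multics, Unix still distinguishes mapped files from anonymous process memory; in Multics even a process's scratch space was a named segment in the file system.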
One disadvantage of this was that the size of segments was limited to 256 kilowords, just over 1 MB. This was due to the particular hardware architecture of the machines on which Multics ran, having a 36-bit word size and index registers (used to address within segments) of half that size (18 bits). Extra code had to be used to work on files larger than this, called multisegment files. In the days when one megabyte of memory was prohibitively expensive, and before large databases and later huge bitmap graphics, this limit was rarely encountered. Another major new idea of Multics was dynamic linking, in which a running process could request that other segments be added to its address space, segments which could contain code that it could then execute. This allowed applications to automatically use the latest version of any external routine they called, since those routines were kept in other segments, which were dynamically linked only when a process first tried to begin execution in them. Since different processes could use different search rules, different users could end up using different versions of external routines automatically. Equally importantly, with the appropriate settings on the Multics security facilities, the code in the other segment could then gain access to data structures maintained in a different process. Thus, to interact with an application running in part as a daemon (in another process), a user's process simply performed a normal procedure-call instruction to a code segment to which it had dynamically linked (a code segment that implemented some operation associated with the daemon). The code in that segment could then modify data maintained and used in the daemon. When the action necessary to commence the request was completed, a simple procedure return instruction returned control of the user's process to the user's code. 
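This style of late binding survives in modern systems. A rough sketch in Python (illustrative only; `snap_link` is a hypothetical helper, not a Multics interface): the routine name is resolved through the process's own "search rules" on first call and cached afterwards, so different processes can bind the same name to different implementations.

```python
import importlib

def snap_link(qualified_name: str):
    """Return a callable that resolves `qualified_name` (e.g. "math.sqrt")
    lazily: the link is "snapped" on first use and cached afterwards,
    loosely imitating how Multics bound external routines on first call."""
    cache = {}
    def call(*args, **kwargs):
        if "target" not in cache:
            module_name, func_name = qualified_name.rsplit(".", 1)
            module = importlib.import_module(module_name)  # map in the "segment"
            cache["target"] = getattr(module, func_name)   # resolve the routine
        return cache["target"](*args, **kwargs)
    return call

sqrt = snap_link("math.sqrt")  # nothing resolved yet
print(sqrt(9.0))               # first call snaps the link, then runs the routine
```

Because resolution happens per caller, swapping the module found by the import machinery changes which implementation a given "process" ends up using, much as Multics search rules did.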
Multics also supported extremely aggressive on-line reconfiguration: central processing units, memory banks, disk drives, etc. could be added and removed while the system continued operating. At the MIT system, where most early software development was done, it was common practice to split the multiprocessor system into two separate systems during off-hours by incrementally removing enough components to form a second working system, leaving the rest still running the original logged-in users. System software development testing could be done on the second system, then the components of the second system were added back to the main user system, without ever having shut it down. Multics supported multiple CPUs; it was one of the earliest multiprocessor systems. Multics was the first major operating system to be designed as a secure system from the outset. Despite this, early versions of Multics were broken into repeatedly. This led to further work that made the system much more secure and prefigured modern security engineering techniques. Break-ins became very rare once the second-generation hardware base was adopted; it had hardware support for ring-oriented security, a multilevel refinement of the concept of master mode. A US Air Force tiger team project tested Multics security in 1973 under the codeword ZARF. On 28 May 1997, the American National Security Agency declassified this use of the codeword ZARF. Multics was the first operating system to provide a hierarchical file system, and file names could be of almost arbitrary length and syntax. A given file or directory could have multiple names (typically a long and short form), and symbolic links between directories were also supported. Multics was the first to use the now-standard concept of per-process stacks in the kernel, with a separate stack for each security ring. It was also the first to have a command processor implemented as ordinary user code – an idea later used in the Unix shell. 
It was also one of the first written in a high-level language (Multics PL/I), after the Burroughs MCP system written in ALGOL. The deployment of Multics into secure computing environments also spurred the development of innovative supporting applications. In 1975, Morrie Gasser of MITRE Corporation developed a pronounceable random word generator to address password requirements of installations such as the Air Force Data Services Center (AFDSC) processing classified information. To avoid guessable passwords, the AFDSC decided to assign passwords but concluded the manual assignment required too much administrative overhead. Thus, a random word generator was researched and then developed in PL1. Instead of being based on phonemes, the system employed phonemic segments (second order approximations of English) and other rules to enhance pronounceability and randomness, which was statistically modeled against other approaches. A descendant of this generator was added to Multics during Project Guardian. Project history In 1964, Multics was developed initially for the GE-645 mainframe, a 36-bit system. GE's computer business, including Multics, was taken over by Honeywell in 1970; around 1973, Multics was supported on the Honeywell 6180 machines, which included security improvements including hardware support for protection rings. Bell Labs pulled out of the project in 1969; some of the people who had worked on it there went on to create the Unix system. Multics development continued at MIT and General Electric. Honeywell continued system development until 1985. About 80 multimillion-dollar sites were installed, at universities, industry, and government sites. The French university system had several installations in the early 1980s. After Honeywell stopped supporting Multics, users migrated to other systems like Unix. 
In 1985, Multics was issued certification as a B2 level secure operating system using the Trusted Computer System Evaluation Criteria from the National Computer Security Center (NCSC), a division of the NSA, the first operating system evaluated to this level. Multics was distributed from 1975 to 2000 by Groupe Bull in Europe, and by Bull HN Information Systems Inc. in the United States. In 2006, Bull SAS released the source code of Multics versions MR10.2, MR11.0, MR12.0, MR12.1, MR12.2, MR12.3, MR12.4 & MR12.5 under a free software license. The last known Multics installation running natively on Honeywell hardware was shut down on October 30, 2000, at the Canadian Department of National Defence in Halifax, Nova Scotia, Canada. Current status In 2006 Bull HN released the source code for MR12.5, the final 1992 Multics release, to MIT. Most of the system is now available as free software with the exception of some optional pieces such as TCP/IP. In 2014 Multics was successfully run on current hardware using an emulator. The 1.0 release of the emulator is now available. Release 12.6f of Multics accompanies the 1.0 release of the emulator, and adds a few new features, including command line recall and editing using the video system. Commands The following is a list of programs and commands for common computing tasks that are supported by the Multics command-line interface. apl ceil change_wdir (cwd) cobol copy (cp) echo emacs floor fortran (ft) gcos (gc) help home_dir (hd) if list (ls) login (l) logout ltrim mail (ml) pascal pl1 print (pr) print_wdir (pwd) runoff (rf) rtrim sort teco trunc where (wh) who working_dir (wd) Retrospective observations Peter H. Salus, author of a book covering Unix's early years, stated one position: "With Multics they tried to have a much more versatile and flexible operating system, and it failed miserably".
This position, however, has been widely discredited in the computing community because many of Multics' technical innovations are used in modern commercial computing systems. The permanently resident kernel of Multics, a system derided in its day as being too large and complex, was only 135 KB of code. In comparison, a Linux system in 2007 might have occupied 18 MB. The first MIT GE-645 had 512 kilowords of memory (2 MiB), a truly enormous amount at the time, and the kernel used only a moderate portion of Multics main memory. The entire system, including the operating system and the complex PL/1 compiler, user commands, and subroutine libraries, consisted of about 1500 source modules. These averaged roughly 200 lines of source code each, and compiled to produce a total of roughly 4.5 MiB of procedure code, which was fairly large by the standards of the day. Multics compilers generally optimised more for code density than CPU performance, for example using small sub-routines called operators for short standard code sequences, which makes comparison of object code size with modern systems less useful. High code density was a good optimisation choice for Multics as a multi-user system with expensive main memory. During its commercial product history, it was often commented internally that the Honeywell Information Systems (HIS) (later
choice of subject matter as well as subversive parody to heighten class consciousness and promote Marxist ideas. Situationist film maker Guy Debord, author of The Society of the Spectacle, began his film In girum imus nocte et consumimur igni with a radical critique of the spectator who goes to the cinema to forget about their dispossessed daily life. Situationist film makers produced a number of important films, where the only contribution by the situationist film cooperative was the sound-track. In Can dialectics break bricks? (1973), a Chinese Kung Fu film was transformed by redubbing into an epistle on state capitalism and Proletarian revolution. The intellectual technique of using capitalism's own structures against itself is known as détournement. Marxist film theory has developed from these precise and historical beginnings and is now sometimes viewed in a wider way to refer to any power relationships or structures within a moving image text. See also Karl Marx in
France United States Monterey Accelerated Research System, a cable-based ocean observatory in Monterey Bay, California Mars, California, a populated place Mars Bluff, South Carolina, an unincorporated community Outingdale, California, formerly called Mars, a populated place Le Mars, Iowa, a city in and the county seat of Plymouth County Mars, Nebraska, a ghost town Mars, Pennsylvania, a borough Mars, Texas, a ghost town Ukraine Mars, Chernihiv Oblast, a village in northern Ukraine Media, music and arts Fictional entities Mars (Black Clover), a character in Black Clover Mars (Doctor Who), a planet in the Doctor Who fictional universe Mars (Biker Mice from Mars), the planet as it appears in Biker Mice from Mars Military Armament Research Syndicate, a fictional organization in the G.I. Joe universe Commander Mars, a Pokémon character Mars or Scarface, an Ultimate Muscle character The Megaversity Association for Reenactments and Simulations, a fictional association in The Big U Mars the Dog, canine star of A Dog's Breakfast Film and television Mars (1930 film), an animated short film in the Oswald the Lucky Rabbit series Mars (1968 film), a Soviet science education/fiction film Mars (1997 film), a film starring Shari Belafonte Mars (1998 film), a film starring Olivier Gruner Mars (2004 film), a Russian film set in Mars, a small town on the Black Sea Mars (2010 film), a 2010 animated film Mars (American TV series), a 2016 docudrama science fiction series Mars (Taiwanese TV series), a 2004 drama series based on the manga by Fuyumi Soryo Mars (talk show), a female talk show on GMA News TV Literature Mars (Fritz Zorn), a 1976 autobiographical essay by Fritz Angst Mars (comics), a comic book series Mars (manga), a 1996 manga series by Fuyumi Soryo Mars trilogy, three science fiction novels by Kim Stanley Robinson Mars, a novel by Ben Bova in the Grand Tour series Mars, 1976 manga series by Mitsuteru Yokoyama The Mars Project, a non-fiction science book by Wernher von Braun
Project Mars: A Technical Tale, a science fiction novel by Wernher von Braun Music albums Mars (B'z album) Mars (Gackt album) Mars, disk two of the Red Hot Chili Peppers double album Stadium Arcadium Mars, an album by Sinkane Music groups Mars Music, a now defunct U.S. music store chain Mars (band), a No Wave band M.A.R.S., a heavy metal supergroup that released the 1986 album Project Driver MARRS, British electronic music group Songs and movements "Mars" (song), a 2008 single by Fake Blood "Mars, the Bringer of War", a movement in Holst's The Planets "Mars", a song by Jay Sean from Neon "Mars", a song by Mario from Closer to Mars "Mars", a song by Soulfly from Prophecy Video gaming Project Mars, codename for the Sega 32X add-on video game console Memory Array Redcode Simulator, the environment for the competitive programming game Core War Military MARS (missile), air to ground missile built by Israel Military Industries Mars Automatic Pistol, a semi-automatic pistol developed in 1900 MARS tanker, a programme to buy new tanker ships for the Royal Fleet Auxiliary ITL MARS, a reflex sight made by International Technologies Lasers Operation Mars, codename of the Second Rzhev-Sychevka Offensive, a Soviet offensive during World War II Military Auxiliary Radio System, an auxiliary communications system of amateur radio operators for the United States armed forces or M270 Multiple Launch Rocket System Operation Mars, 28 March 1918 German offensive in World War I, part of the Spring Offensive Warships Dutch frigate Mars, later HMS Mars, a 32-gun fifth rate ship of the line built in 1769 French privateer Mars (1746), later HMS Mars, a 64-gun third-rate HMS Mars (1759), a 74-gun third rate HMS Mars (1794), a 74-gun third rate HMS Mars (1848), an 80-gun second rate HMS Mars (1896), a Majestic-class battleship HMS Mars (R76), a Colossus-class aircraft carrier renamed HMS Pioneer in 1944 SMS Mars (1879), a German gunnery training ship SMS Tegetthoff (1878) or SMS Mars, an Austro-Hungarian central battery ship Swedish warship Mars, a ship sunk in 1564 USS Mars (1798), a galley USS Mars (AC-6), launched in 1909 USS Mars (AFS-1), launched in 1963 USS Mars, several ships of the US Navy Mars, a planned Minotaur-class cruiser of the Royal Navy, cancelled in 1946 French ship Mars, a list of French warships HMS Mars, a list of ships of the Royal Navy SMS Mars, a list of ships Organizations and products Mars (beer), a type of lambic ale Mars (motorcycle), a defunct German motorcycle manufacturer Mars (oil platform), an oil drilling platform in the Gulf of Mexico Mars (supermarket), a U.S. grocery chain Icaro Mars, an Italian hang glider design MARS Group, a British architectural think tank founded in 1933 Mauritius Amateur Radio Society Mongolian Amateur Radio Society Mumbai Amateur Radio Society, Mumbai, India Mars Tver, a former name of THK Tver, a minor professional ice hockey club in Tver, Russia People Mars (surname), a list of people with the surname Cheung Wing-fat (born 1954), nicknamed Mars, Hong Kong
stand alone. For instance, the Latin root reg- (‘king') must always be suffixed with a case marker: rex (reg-s), reg-is, reg-i, etc. For a language like Latin, a root can be defined as the main lexical morpheme of a word. Example English words have the following morphological analyses. "Unbreakable" is composed of three morphemes: un- (a bound morpheme signifying "not"), -break- (the root, a free morpheme), and -able (a bound morpheme signifying "can be done"). The plural morpheme for regular nouns (-s) has three allomorphs: it is pronounced /s/ (e.g., in cats), /ɪz/ (e.g., in dishes), and /z/ (e.g., in dogs), depending on the pronunciation of the root. Classification of morphemes Free and bound morphemes Every morpheme can be classified as either free or bound. Free morphemes can function independently as words (e.g. town, dog) and can appear within lexemes (e.g. town hall, doghouse). Bound morphemes appear only as parts of words, always in conjunction with a root and sometimes with other bound morphemes. For example, un- appears only accompanied by other morphemes to form a word. Most bound morphemes in English are affixes, specifically prefixes and suffixes. Examples of suffixes are -tion, -sion, -tive, -ation, -ible, and -ing. Bound morphemes that are not affixes are called cranberry morphemes. Classification of bound morphemes Bound morphemes can be further classified as derivational or inflectional morphemes. The main difference between them is their function in relation to words. Derivational bound morphemes Derivational morphemes, when combined with a root, change the semantic meaning or the part of speech of the affected word. For example, in the word happiness, the addition of the bound morpheme -ness to the root happy changes the word from an adjective (happy) to a noun (happiness). In the word unkind, un- functions as a derivational morpheme since it inverts the meaning of the root morpheme (word) kind.
Generally, morphemes that affix (i.e., affixes) to a root morpheme (word) are bound morphemes. Inflectional bound morphemes Inflectional morphemes modify the tense, aspect, mood, person, or number of a verb, or the number, gender, or case of a noun, adjective, or pronoun, without affecting the word's meaning or class (part of speech). Examples of applying inflectional morphemes to words are adding -s to the root dog to form dogs, or adding -ed to wait to form waited. An inflectional morpheme changes the form of a word. English has eight inflections. Allomorphs Allomorphs are variants of a morpheme that differ in pronunciation but are semantically identical. For example, the English plural marker -(e)s of regular nouns can be pronounced /s/ (bats), /z/ (bugs), or /ɪz/ (buses), depending on the final sound of the noun's plural form. Zero morphemes A zero-morpheme is a type of morpheme that carries semantic meaning but is not represented by auditory phonemes. They are often represented by /Ø/ within glosses. Generally, these types of morphemes have no visible changes. For instance, sheep is both the singular and the plural form. The intended meaning is thus derived from the co-occurring determiner (in this case, "some" or "a"). Content vs. function Content morphemes express a concrete meaning or content, and function morphemes have more of a grammatical role. For example, the morphemes fast and sad can be considered content morphemes. On the other hand, the suffix -ed is a function morpheme since it has the grammatical function of indicating past tense. Both categories may seem very clear and intuitive, but the idea behind them is occasionally harder to grasp since they overlap with each other. Examples of ambiguous situations are the preposition over and the determiner your, which seem to have concrete meanings but are considered function morphemes since their role is to connect ideas grammatically.
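The plural allomorphy described under Allomorphs above is rule-governed: the choice among /s/, /z/, and /ɪz/ is conditioned by the stem's final sound. A rough sketch follows (using spelling as a crude stand-in for phonology; the letter groupings are simplifying assumptions, and real selection operates on sounds, not letters):

```python
def plural_allomorph(noun: str) -> str:
    """Guess the English plural allomorph for a noun from its final
    letters -- a simplified, orthography-based approximation."""
    sibilants = ("s", "z", "sh", "ch", "x")   # trigger /ɪz/ (dishes, buses)
    voiceless = ("p", "t", "k", "f", "th")    # trigger /s/  (cats, cliffs)
    if noun.endswith(sibilants):
        return "ɪz"
    if noun.endswith(voiceless):
        return "s"
    return "z"                                # voiced finals (dogs, bees)

for word in ("bat", "bug", "bus"):
    print(word, plural_allomorph(word))
```

Note how the same written -s surfaces as three different pronunciations, which is exactly what makes them allomorphs of one morpheme.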
Here is a general rule to determine the category of a morpheme: Content morphemes include free morphemes that are nouns, adverbs, adjectives, and verbs, and include bound morphemes that are bound roots and derivational affixes. Function morphemes may be free morphemes that are prepositions, pronouns, determiners, and conjunctions. Sometimes, they are bound morphemes that are inflectional affixes. Other features Roots are composed of only one morpheme, while stems can be composed of more than one morpheme. Any additional affixes are considered morphemes. For example, in the word quirkiness, the root is quirk, but the stem is quirky, which has two morphemes. Moreover, some pairs of affixes have identical phonological form but different meanings. For example, the suffix -er can be either derivational (e.g. sell ⇒ seller) or inflectional (e.g. small ⇒ smaller). Such morphemes are called homophonous. Some words might seem to be composed of multiple morphemes but are not. Therefore, not only form but also meaning must be considered when identifying morphemes. For example, the word relate might seem to be composed of two morphemes, re-
Nina Blackwood moved on to pursue new roles in television. Martha Quinn's contract was not renewed in late 1986 and she departed the network; Downtown Julie Brown was hired as the first new VJ to replace her. Quinn was brought back in early 1989 and stayed until 1992. In mid-1987, Alan Hunter and Mark Goodman ceased being full-time MTV VJs.

Return of the Rock
Beginning in late 1997, MTV progressively reduced its airing of rock music videos, leading skeptics to coin the slogan "Rock is dead." Two years later, in the fall of 1999, MTV announced a special Return of the Rock weekend, in which new rock acts received airtime, after which a compilation album was released. By 2000, Linkin Park, Sum 41, Jimmy Eat World, Mudvayne, Cold, At the Drive-In, Alien Ant Farm, and other acts were added to the musical rotation. MTV also launched the subscription channel MTVX to play rock music videos exclusively.

Total Request Live
In 1997, MTV introduced its new studios in Times Square. MTV created four shows in the late 1990s that centered on music videos: MTV Live, Total Request, Say What?, and 12 Angry Viewers. A year later, in 1998, MTV merged Total Request and MTV Live into a live daily top-ten countdown show, Total Request Live, which became known as TRL. The original host was Carson Daly. The show included a live studio audience and was filmed in a windowed studio that allowed crowds to look in. According to Nielsen, the show's average audience peaked in 1999 and remained strong through 2001. The program played the top ten pop, rock, R&B, and hip hop music videos, and featured live interviews with artists and celebrities. In 2003, Carson Daly left MTV and TRL to focus on his late-night talk show on NBC. The series came to an end with a special finale episode, Total Finale Live, which aired November 16, 2008, and featured hosts and guests who had previously appeared on the show.
From 1998 to 2003, MTV also aired several other music video programs from its studios. These included Say What? Karaoke, a game show hosted by Dave Holmes; VJ for a Day, hosted by Ray Munns in the early 2000s; and Hot Zone, hosted by Ananda Lewis, which featured pop music videos during the midday time period. Other programs at the time included Sucker Free and BeatSuite.

Milestones and specials
From around 1999 through 2001, as MTV aired fewer music videos throughout the day, it regularly aired compilation specials from its then 20-year history to look back on its roots. An all-encompassing special, MTV Uncensored, premiered in 1999 and was later released as a book. Janet Jackson became the inaugural honoree of the "MTV Icon" award, "an annual recognition of artists who have made significant contributions to music, music video and pop culture while tremendously impacting the MTV generation." Subsequent recipients included Aerosmith, Metallica, and the Cure.

1995–2010: Shift from music
From 1995 to 2000, MTV played 36.5% fewer music videos. MTV president Van Toffler stated: "Clearly, the novelty of just showing music videos has worn off. It's required us to reinvent ourselves to a contemporary audience." The network launched the MTV Radio Network in 1995 with Westwood One. Despite targeted efforts to play certain types of music videos in limited rotation, MTV greatly reduced its overall rotation of music videos by the mid-2000s. While music videos aired on MTV for up to eight hours per day in 2000, by 2008 the channel averaged just three hours of music videos per day. It has been speculated that the rise of social media and of websites like YouTube as outlets for the promotion and viewing of music videos led to this reduction. During this time, MTV hired Nancy Bennett as senior VP of creative and content development for MTV Networks Music. As the decade progressed, MTV's music video blocks were relegated to the early morning hours.
During his acceptance speech at the 2007 MTV Video Music Awards, Justin Timberlake implored MTV to "play more damn videos!" in response to these changes. Over the next decade, MTV engaged in channel drift, gradually expanding its programming beyond music videos with shows lightly or heavily related to music. MTV became known for its reality programming, some of which followed the lives of musicians: The Osbournes, a reality show based on the everyday life of Black Sabbath frontman Ozzy Osbourne and his family, premiered in 2002 and became one of the network's premier shows. It also kick-started a musical career for Kelly Osbourne, while Sharon Osbourne went on to host her own self-titled talk show on US television. Production ended on The Osbournes in November 2004. 2007's A Shot at Love with Tila Tequila, chronicling MySpace sensation Tila Tequila's journey to find a companion, drew criticism due to Tequila's bisexuality. MTV also ventured into adult animation, with shows like Celebrity Deathmatch, Undergrads, Clone High, and Daria each becoming cult classics. Prior to Total Request Live ending its run in 2008, MTV experimented with its remaining music programming under new formats. MTV first premiered a new music video programming block called FNMTV, and a weekly special event called FNMTV Premieres, hosted from Los Angeles by Pete Wentz of the band Fall Out Boy, which was designed to premiere new music videos and have viewers provide instantaneous feedback. AMTV, an early-morning block, debuted in 2009. The block was rebranded as Music Feed in 2013 with a reduced schedule and, unlike FNMTV, featured many full-length music videos, news updates, interviews, and performances. MTV continued to air music programming over the next decade, with the return of MTV Unplugged in 2009, the debut of 10 on Top in May 2010, and Hip Hop POV on April 12, 2012.
2009 saw the debut of Jersey Shore, which became a ratings success throughout its run and spawned the "MTV Shores" franchise, but also attracted various controversies. Amid backlash toward what some considered too much superficial content on the network, a 2009 New York Times article revealed plans to shift MTV's focus towards more socially conscious media, which the article labeled "MTV for the Obama era." Shortly after Michael Jackson died on June 25, 2009, the channel aired several hours of Jackson's music videos, accompanied by live news specials featuring reactions from MTV personalities and other celebrities. The temporary shift in MTV's programming culminated the following week with the channel's live coverage of Jackson's memorial service. MTV aired similar one-hour live specials with music videos and news updates following the death of Whitney Houston on February 11, 2012, and the death of Adam Yauch of the Beastie Boys on May 4, 2012.

2010–present: Retirement from music videos
In 2010, MTV dropped the "Music Television" branding. The network still aired video premieres on occasion, through both television and real-time interaction with artists and celebrities on its website. Throughout the decade, music programming on the network was scaled back. In April 2016, newly appointed MTV president Sean Atkins announced plans to restore music programming to the channel. On April 21, 2016, MTV announced that new Unplugged episodes would begin airing, as well as a new weekly performance series called Wonderland. On that same day, immediately after the death of Prince, MTV interrupted its usual programming to air Prince's music videos. In July 2017, it was announced that TRL would return to the network on October 2, 2017.
Throughout the 2010s, MTV's daily schedule came to consist predominantly of film broadcasts and frequent marathons of select original programming such as Ridiculousness, with many criticizing the frequency of the Ridiculousness marathons in particular. Alongside its unscripted slate, MTV produced more scripted programming, including Awkward., an American version of Skins, and a reimagining of Teen Wolf. In June 2012, the network announced the development of a television series based on the Scream franchise. As MTV pivoted back to unscripted programming towards the end of the decade, some of these shows were moved to other networks. Chris McCarthy was named president of MTV in 2016. In 2021, McCarthy was named president and CEO of MTV Entertainment Group (which also oversees Comedy Central, Paramount Network, TV Land, CMT, and Smithsonian Channel).

Programming
As MTV expanded, music videos and VJ-guided programming were no longer the centerpiece of its schedule. The channel's programming has covered a wide variety of genres and formats aimed at adolescents and young adults. In addition to its original programming, MTV has also aired original and syndicated programs from Paramount-owned siblings and third-party networks. MTV is also a producer of films aimed at young adults through its production label, MTV Films, and has aired both its own theatrically released films and original made-for-television movies from MTV Studios, in addition to acquired films. In 2010, a study by the Gay and Lesbian Alliance Against Defamation found that of 207.5 hours of prime-time programming on MTV, 42% included content reflecting the lives of gay, bisexual and transgender people; this was the highest in the industry and the highest percentage ever recorded. In 2018, MTV launched a new production unit under the "MTV Studios" name focused on producing new versions of shows from MTV's library. The unit was later renamed MTV Entertainment Studios.
Logo and branding
MTV's now-iconic logo was designed in 1981 by Manhattan Design (a collective formed by Frank Olinsky, Pat Gorman and Patty Rogoff) under the guidance of original creative director Fred Seibert. The block letter "M" was sketched by Rogoff, with the scribbled word "TV" spray-painted by Olinsky. The primary variant of MTV's logo at the time had the "M" in yellow and the "TV" in red, but unlike most television networks' logos of the era, the logo was constantly rendered with different colors, patterns and images across a variety of station IDs. Examples include 1988's ID "Adam And Eve", in which the "M" is an apple and a snake forms the "TV", and 1984's ID "Art History", in which the logo is shown in different art styles. The only constant aspects of MTV's logo at the time were its general shape and proportions; everything else was dynamic. MTV launched on August 1, 1981, with an extended network ID featuring the first landing on the moon (with still images acquired directly from NASA), a concept of Seibert's executed by Buzz Potamkin and Perpetual Motion Pictures. The ID then cut to the American flag planted on the moon's surface, altered to show the MTV logo on it, which rapidly changed into different colors and patterns several times per second as the network's original guitar-driven jingle played for the first time. After MTV's launch, the "moon landing" ID was edited to show only its ending, and was shown at the top of every hour until early 1986, when it was scrapped in light of the Space Shuttle Challenger disaster. The ID ran "more than 75,000 times each year (48 times each day), at the top and bottom of every hour every day", according to Seibert. From the late 1990s to the early 2000s, MTV updated its on-air appearance at the beginning of every year and each summer, creating a consistent brand across all of its music-related shows.
This style of channel-wide branding came to an end as MTV drastically reduced its number of music-related shows in the early to mid-2000s. Around this time, MTV introduced a static, single-color digital on-screen graphic shown during all of its programming. Starting with the premiere of the short-lived program FNMTV: Friday Night MTV in 2008, MTV started using an updated and cropped version of its original logo during most of its on-air programming. It became MTV's official logo on February 8, 2010, and officially debuted on its website. The full text "MUSIC TELEVISION" was eliminated; the revised, chopped-down logo was largely the same as the original, but without the initialism, with the bottom of the "M" cropped and the "V" in "TV" no longer branching off. This change was most likely made to reflect MTV's more prominent focus on reality and comedy programming and its reduced focus on music-related programming. However, much like the original logo, the new logo was designed to be filled in with a seemingly unlimited variety of images. It is used worldwide, though not universally. The new logo was first used in the MTV Films logo with the 2010 film Jackass 3D. MTV's rebranding was overseen by Popkern. On June 25, 2015, MTV International rebranded its on-air look with a new vaporwave- and seapunk-inspired graphics package. It included a series of new station IDs featuring 3D renderings of objects and people, much akin to vaporwave and seapunk "aesthetics". Many derided MTV's choice of rebranding, insisting that the artistic style was centered on denouncing corporate capitalism (many aesthetic pieces heavily incorporate corporate logos of the 1970s, '80s and '90s, which coincidentally include MTV's original logo) rather than on being embraced by major corporations like MTV. Many also suggested that MTV was attempting to stay relevant in the modern entertainment world with the rebrand.
The rebrand also arrived on the same day that the social media site Tumblr introduced Tumblr TV, an animated GIF viewer whose branding was inspired by MTV's original 1980s on-air look. Tumblr has been cited as a prominent home of aesthetic art, leading many to suggest that MTV and Tumblr had "switched identities". The rebrand also incorporated a modified version of MTV's classic "I Want My MTV!" slogan, changed to read "I Am My MTV". Vice suggested that the slogan change represents "the current generation's movement towards self-examination, identity politics and apparent narcissism." MTV also introduced MTV Bump, a website that allows Instagram and Vine users to submit videos to be aired during commercial breaks, as well as MTV Canvas, an online program through which users submit custom IDs, also aired during commercial breaks. On February 5, 2021, MTV began to use a revised logo in tandem with the 2010 version, doing away with the 3D effect inherited from its predecessors (much akin to the current MTV Video Music Awards variant). The new logo's rollout was completed in time for the 2021 MTV Video Music Awards.

"I Want My MTV!"
The channel's iconic "I want my MTV!" advertising campaign was launched in 1982. It was first developed by George Lois and was based on a 1950s cereal commercial with the slogan "I Want My Maypo!", which Lois had unsuccessfully adapted from the original created by animator John Hubley. Lois's first pitch to the network was roundly rejected when he insisted that rock stars like Mick Jagger should be crying as they said the tag line, not unlike his failed "Maypo" revamp. His associate and Seibert mentor, Dale Pon, took over the campaign strategically and creatively, and was able to get it greenlit when he laughed the tears out of the spots. From then on (with the exception of the closing logos on the first round of commercials), Pon was the primary creative force.
All the commercials were produced by Buzz Potamkin and his new company Buzzco Productions, directed first by Thomas Schlamme and Alan Goodman and eventually by Candy Kugel. The campaign featured popular artists and celebrities, including Pete Townshend, Pat Benatar, Adam Ant, David Bowie, the Police, Kiss, Culture Club, Billy Idol, Hall & Oates, Cyndi Lauper, Madonna, Lionel Richie, Ric Ocasek, John Mellencamp, Peter Wolf, Joe Elliott, Stevie Nicks, Rick Springfield, and Mick Jagger, interacting with the MTV logo on-air and encouraging viewers to call their pay television providers and request that MTV be added to their local channel lineups. Eventually, the slogan became so ubiquitous that it appeared as a lyric sung by Sting on the Dire Straits song "Money for Nothing", whose music video aired in regular rotation on MTV when it was first released in 1985.

Influence and controversies
The channel has been a target of criticism by different groups over programming choices, social issues, political correctness, sensitivity, censorship, and a perceived negative social influence on young people. Portions of the content of MTV's programs and productions have drawn controversy in the general news media and among social groups that have taken offense. Some within the music industry criticized what they saw as MTV's homogenization of rock 'n' roll, including the punk band the Dead Kennedys, whose song "M.T.V. Get Off the Air" was released on their 1985 album Frankenchrist, just as MTV's influence over the music industry was being solidified. MTV was also the major influence on the growth of music videos during the 1980s.

Breaking the "color barrier"
During MTV's first few years, very few black artists were featured. The select few in MTV's rotation between 1981 and 1984 were Michael Jackson, Prince, Eddy Grant, Tina Turner, Donna Summer, Joan Armatrading, Musical Youth, The Specials, The Selecter, Grace Jones and Herbie Hancock.
Mikey Craig of Culture Club, Joe Leeway of Thompson Twins and Tracy Wormworth of The Waitresses were also black. The Specials, which included black and white vocalists and musicians, were the first act with people of color to perform on MTV; their song "Rat Race" was the 58th video on the station's first broadcast day. MTV refused other black artists' videos, such as Rick James' "Super Freak", because they did not fit the channel's carefully selected album-oriented rock format at the time. The exclusion enraged James, who publicly advocated the addition of more black artists to the channel. David Bowie also questioned MTV's lack of black artists during an on-air interview with VJ Mark Goodman in 1983. MTV's original head of talent and acquisition, Carolyn B. Baker, who was black, questioned why the definition of music had to be so narrow, as did a few others outside the network. Years later, Baker said, "The party line at MTV was that we weren't playing black music because of 'the research' – but the research was based on ignorance... We were young, we were cutting-edge. We didn't have to be on the cutting edge of racism." Nevertheless, it was Baker who rejected Rick James' "Super Freak" video "because there were half-naked women in it, and it was a piece of crap. As a black woman, I did not want that representing my people as the first black video on MTV." The network's director of music programming, Buzz Brindle, told an interviewer in 2006: "MTV was originally designed to be a rock music channel. It was difficult for MTV to find African American artists whose music fit the channel's format that leaned toward rock at the outset." Writers Craig Marks and Rob Tannenbaum noted that the channel "aired videos by plenty of white artists who didn't play rock." Andrew Goodwin later wrote: "[MTV] denied racism, on the grounds that it merely followed the rules of the rock business."
MTV senior executive vice president Les Garland complained decades later, "The worst thing was that 'racism' bullshit... there were hardly any videos being made by black artists. Record companies weren't funding them. They never got charged with racism." However, critics of that defense pointed out that record companies were not funding videos for black artists because they knew they would have difficulty persuading MTV to play them. Celebrating the 40th anniversary of the network's launch in 2021, current MTV Entertainment Group president Chris McCarthy acknowledged that "(o)ne of the bigger mistakes in the early years was not playing enough diverse music...but the nice thing that I've always learned at MTV is we have no problem owning our mistakes, quickly correcting them and trying to do the right thing and always follow where the audience is going." Before 1983, Michael Jackson also struggled for MTV airtime. To resolve the struggle and finally "break the color barrier", the president of CBS Records, Walter Yetnikoff, denounced MTV in a strong, profane statement, threatening to take away its right to play any of the label's music. However, Les Garland, then acquisitions head, said he decided to air Jackson's "Billie Jean" video without pressure from CBS, a statement later contradicted by CBS head of business affairs David Benjamin in Vanity Fair.
At the beginning of June, "Electric Avenue" by Eddy Grant joined "Billie Jean", which remained in heavy rotation until mid-June. At the end of August, "She Works Hard for the Money" by Donna Summer was in heavy rotation on the channel. Herbie Hancock's "Rockit" and Lionel Richie's "All Night Long" were placed in heavy rotation at the end of October and the beginning of November, respectively. In the final week of November, Donna Summer's "Unconditional Love" was in heavy rotation. When Jackson's elaborate video for "Thriller" was released late that year, raising the bar for what a video could be, the network's support for it was total; subsequently, more pop and R&B videos were played on MTV. Following Jackson's and Prince's breakthroughs on MTV, Rick James did several interviews in which he brushed off the accomplishment as tokenism, saying in a 1983 interview (recounted in an episode of Mike Judge Presents: Tales from the Tour Bus on James) that "any black artist that [had] their video played on MTV should pull their [videos] off MTV."

Subsequent concepts
HBO also had a 30-minute music video program called Video Jukebox, which first aired around the time of MTV's launch and lasted until late 1986. Also around this time, HBO, as well as other premium channels such as Cinemax, Showtime and The Movie Channel, occasionally played one or a few music videos between movies. SuperStation WTBS launched Night Tracks on June 3, 1983, with up to 14 hours of music video airplay each late-night weekend by 1985. Its most noticeable difference was that black artists whom MTV initially ignored received airplay. The program ran until the end of May 1992. A few markets also launched music-only channels, including Las Vegas' KVMY (channel 21), which debuted in the summer of 1984 as KRLR-TV, branded as "Vusic 21". Following in the footsteps of MTV, the first video played on that channel was "Video Killed the Radio Star".
Shortly after TBS began Night Tracks, NBC launched a music video program called Friday Night Videos, which was considered network television's answer to MTV. Later renamed simply Friday Night, the program ran from 1983 to 2002. ABC's contribution to the music video program genre in 1984, ABC Rocks, was far less successful, lasting only a year. TBS founder Ted Turner started the Cable Music Channel in 1984, designed to play a broader mix of music videos than MTV's rock format allowed. But after one month as a money-losing venture, Turner sold it to MTV, which redeveloped the channel into VH1. Shortly after its launch, the Disney Channel aired a program called DTV, a play on the MTV acronym. The program used music cuts from both past and upcoming artists, but instead of music videos, it set the songs to clips of various vintage Disney cartoons and animated films. The program aired in multiple formats, sometimes between shows, sometimes as its own program, and other times as one-off specials; the specials tended to air both on the Disney Channel and NBC. The program aired at several times between 1984 and 1999. In 2009, Disney Channel revived the DTV concept with a new series of short-form segments called Re-Micks.

Censorship
MTV has edited a number of music videos to remove references to drugs, sex, violence, weapons, racism, homophobia, and/or advertising.
Many music videos aired on the channel were either censored, moved to late-night rotation, or banned entirely. In the 1980s, parental media watchdog groups such as the Parents Music Resource Center (PMRC) criticized MTV over certain music videos claimed to contain explicit imagery of satanism. As a result, MTV developed a strict policy of refusing to air videos that might depict Satanism or anti-religious themes. This policy led MTV to ban music videos such as "Jesus Christ Pose" by Soundgarden in 1991 and "Megalomaniac" by Incubus in 2004; however, the controversial band Marilyn Manson was among the most popular rock bands on MTV during the late 1990s and early 2000s. On September 28, 2016, on an AfterBuzz TV live stream, Scout Durwood said that MTV had a "no appropriation policy" that forbade her from wearing her hair in cornrows in an episode of Mary + Jane. She said, "I wanted to cornrow my hair, and they were like, 'That's racist.'"

Trademark suit
Magyar Televízió, Hungary's public broadcaster, which holds a trademark on the initials MTV registered with the Hungarian copyright office, sued the American MTV (Music Television) network for trademark infringement when the Hungarian version of the music channel was launched in 2007. The suit is still ongoing.

Andrew Dice Clay
During the 1989 MTV Video Music Awards ceremony, comedian Andrew Dice Clay performed his usual "adult nursery rhymes" routine (which he had done in his stand-up acts), after which the network's executives imposed a lifetime ban. Billy Idol's music video for the song "Cradle of Love" originally contained scenes from Clay's film The Adventures of Ford Fairlane when it first aired; the scenes were later excised. Clay attended the 2011 MTV Video Music Awards, where he confirmed that the channel had lifted the ban.
Beavis and Butt-head
In the wake of controversy involving a child who burned down his house after allegedly watching Beavis and Butt-head, MTV moved the show from its original 7 p.m. time slot to an 11 p.m. time slot. Beavis' tendency to flick a lighter and yell "fire" was also removed from new episodes, and controversial scenes were cut from existing episodes before their rebroadcast. Some extensive edits were noted by series creator Mike Judge after compiling his Collection DVDs; he said that "some of those episodes may not even exist actually in their original form."

Dude, This Sucks
A pilot for a show called Dude, This Sucks was canceled after teens attending a taping at the Snow Summit Ski Resort in January 2001 were sprayed with liquefied fecal matter by a group known as "The Shower Rangers". The teens later sued; MTV apologized and ordered the segment's removal.

Super Bowl XXXVIII halftime show
After Viacom's purchase of CBS, MTV was selected to produce the Super Bowl XXXV halftime show in 2001, airing on CBS and featuring Britney Spears, NSYNC, and Aerosmith. Due to its success, MTV was invited back to produce another halftime show in 2004; this sparked a nationwide debate and controversy that drastically changed Super Bowl halftime shows, MTV's programming, and radio censorship. When CBS aired Super Bowl XXXVIII in 2004, MTV was again chosen to produce the halftime show, with performances by artists such as Nelly, P. Diddy, Janet Jackson, and Justin Timberlake. The show became controversial, however, after Timberlake tore off part of Jackson's outfit while performing "Rock Your Body" with her, revealing her right breast. All involved parties apologized for the incident, which Timberlake referred to as a "wardrobe malfunction". Michael Powell, then chairman of the Federal Communications Commission, ordered an investigation the day after the broadcast. In the weeks following the halftime show, MTV censored much of its programming.
Several music videos, including "This Love" and "I Miss You", were edited for sexual content. In September 2004, the FCC ruled that the halftime show was indecent and fined CBS $550,000. The FCC upheld the fine in 2006, but federal judges reversed it in 2008. Nipplegate Timberlake and Jackson's controversial moment gave rise to a "wave of self-censorship on American television unrivaled since the McCarthy era". The incident quickly acquired nicknames such as "Nipplegate", "Janet moment", and "boobgate", and the controversy spread into politics, fueling debate over "moral values" and "media decency" during the 2004 presidential election. Moral criticism The Christian right organization American Family Association has also criticized MTV for what it perceives as a negative moral influence, describing MTV as promoting a "pro-sex, anti-family, pro-choice, drug culture". In 2005, the Parents Television Council (PTC) released a study titled "MTV Smut Peddlers", which sought to expose excessive sexual, profane, and violent content on the channel, based on MTV's spring break programming from 2004. Jeanette Kedas, an MTV network executive, called the PTC report "unfair and inaccurate" and said it was "underestimating young people's intellect and level of sophistication", while L. Brent Bozell III, then-president of the PTC, stated: "the incessant sleaze on MTV presents the most compelling case yet for consumer cable choice", referring to the practice of pay television companies allowing consumers to pay for channels à la carte. In April 2008, the PTC released The Rap on Rap, a study covering hip-hop and R&B music videos in rotation on the programs 106 & Park and Rap City, both shown on BET, and Sucker Free on MTV. The PTC urged advertisers to withdraw sponsorship of those programs, whose videos, the study stated, contained adult content while being targeted at children and teenagers. Jersey Shore MTV received significant criticism from Italian American organizations for Jersey Shore, which premiered in 2009.
The controversy was due in large part to the manner in which MTV marketed the show, as it liberally used the word "guido" to describe the cast members. The word "guido" is generally regarded as an ethnic slur when referring to Italians and Italian Americans. One promotion stated that the show was to follow "eight of the hottest, tannest, craziest Guidos," while another advertisement stated, "Jersey Shore exposes one of the tri-state area's most misunderstood species ... the GUIDO. Yes, they really do exist! Our Guidos and Guidettes will move into the ultimate beach house rental and indulge in everything the Seaside Heights, New Jersey scene has to offer." Prior to the series debut, Unico National formally requested that MTV cancel the show. In a formal letter, the organization called the show a "direct, deliberate and disgraceful attack on Italian Americans." Unico National President Andre DiMino said, "MTV has festooned the 'bordello-like' house set with Italian flags and red, white and green maps of New Jersey while every other cutaway shot is of Italian signs and symbols. They are blatantly as well as subliminally bashing Italian Americans with every technique possible." Around this time, other Italian American organizations joined the protest, including the NIAF and the Order Sons of Italy in America. MTV responded by issuing a press release which stated in part, "The Italian American cast takes pride in their ethnicity. We understand that this show is not intended for every audience and depicts just one aspect of youth culture." Following the calls for the show's removal, several sponsors, including Dell, Domino's Pizza, and American Family Insurance, requested that their ads not be aired during the show. Despite the loss of certain advertisers, MTV did not cancel the show. Moreover, the show's audience grew after its 2009 premiere, and it remained among MTV's top-rated programs during its six-season run, which ended in 2012.
Resolutions for White Guys In December 2016, MTV online published a social justice-oriented New Year's resolution-themed video directed towards white men. The video caused widespread outrage online, including video responses from well-known online personas, and was deleted from MTV's YouTube channel. The video was then reuploaded to their channel, with MTV claiming the new video contained "updated graphical elements". The new video quickly received over 10,000 dislikes and fewer than 100 likes from only 20,000 views, and MTV deleted the video for a second time. Social activism In addition to its regular programming, MTV has a long history of promoting social, political, and environmental activism in young people. The channel's vehicles for this activism have been Choose or Lose, encompassing political causes and encouraging viewers to vote in elections; Fight For Your Rights, encompassing anti-violence and anti-discrimination causes; think MTV; and MTV Act and Power of 12, the newest umbrellas for MTV's social activism. Choose or Lose In 1992, MTV started a pro-democracy campaign called Choose or Lose, to encourage over 20 million people to register to vote, and the channel hosted a town hall forum for then-candidate Bill Clinton. In recent years, other politically diverse programs on MTV have included True Life, which documents people's lives and problems, and MTV News specials, which center on very current events in both the music industry and the world. One special show covered the 2004 US presidential election, airing programs focused on the issues and opinions of young people, including a program where viewers could ask questions of Senator John Kerry. MTV worked with P. Diddy's "Citizen Change" campaign, designed to encourage young people to vote. Additionally, MTV aired a documentary covering a trip by the musical group Sum 41 to the Democratic Republic of the Congo, documenting the conflict there. 
The group ended up being caught in the midst of an attack outside its hotel and was subsequently flown out of the country. The channel also began showing presidential campaign commercials for the first time during the 2008 US presidential election. This has led to criticism, with Jonah Goldberg opining that "MTV serves as the Democrats' main youth outreach program." Rock the Vote MTV is aligned with Rock the Vote, a campaign to motivate young adults to register and vote. MTV Act and Power of 12 In 2012, MTV launched MTV Act and Power of 12, its social activism campaigns at the time. MTV Act focuses on a wide array of social issues, while Power of 12 replaced MTV's Choose or Lose and focused on the 2012 US presidential election. Elect This In 2016, MTV continued its pro-democracy campaign with Elect This, an issue-oriented look at the 2016 election targeting Millennials. Original content under the "Elect This" umbrella includes "Infographica," short animations summarizing MTV News polls; "Robo-Roundtable," a digital series hosted by animatronic robots; "The Racket," a multi-weekly digital series; and "The Stakes," a weekly political podcast. Beyond MTV Since its launch in 1981, the brand "MTV" has expanded to include many additional properties beyond the original MTV channel, including a variety of sister channels in the US, dozens of affiliated channels around the world, and an Internet presence through MTV.com and related websites. Sister channels in the US MTV operates a group of channels under MTV Networks, a name that continues to be used for the individual units of what is now Paramount Media Networks, a division of corporate parent Paramount Global. In 1985, MTV introduced its first regular sister channel, VH1, which was originally an acronym for "Video Hits One" and was designed to play adult contemporary music videos. VH1 has since shifted toward celebrity and popular culture programming, including many reality shows.
Another sister channel, CMT, targets the country music and southern culture market. The advent of satellite television and digital cable brought MTV greater channel diversity, including its current sister channels MTV2 and the Spanish-language MTV Tr3s (now Tr3s), which initially played music videos exclusively but now focus on other programming. MTV also formerly broadcast MTVU on campuses at various universities until 2018, when the MTV Networks on Campus division was sold and the channel remained as a digital cable channel only. MTV also formerly had the channels MTV Hits and MTVX, which were converted into NickMusic and MTV Jams, respectively. MTV Jams was later rebranded as BET Jams in 2015. In the 2000s, MTV launched MTV HD, a 1080i high definition simulcast feed of MTV. Until Viacom's main master control was upgraded in 2013, only the network's original series after 2010 (with some pre-2010 content) were broadcast in high definition, while music videos, despite being among the first television works to convert to high definition presentation in the early 2000s, were presented in 4:3 standard definition, forcing them into a windowboxed presentation; since that time all music videos have been presented in HD and are framed to their director's preference. Jersey Shore, despite being shot with widescreen HD cameras, was also presented with SD windowboxing (though the 2018 Family Vacation revival is in full HD). The vast majority of providers carry MTV HD. MTV Networks also operates MTV Live, a high-definition channel that features original HD music programming and HD versions of music-related programs from MTV, VH1 and CMT. The channel was launched in January 2006 as MHD (Music: High Definition) and was officially rebranded as MTV Live on February 1, 2016. In 2005 and 2006, MTV launched a set of channels for Asian Americans. The first channel was MTV Desi, launched in July 2005 and dedicated to Indian Americans.
Next was MTV Chi, in December 2005, which catered to Chinese Americans. The third was MTV K, launched in June 2006 and targeted toward Korean Americans. Each of these channels featured music videos and shows from MTV's international affiliates as well as original US programming, promos, and packaging. All three of these channels ceased broadcasting on April 30, 2007. On August 1, 2016, the 35th anniversary of the original MTV's launch, VH1 Classic was rebranded as MTV Classic. The channel's programming focused on classic music videos and programming (including notable episodes of MTV Unplugged and VH1 Storytellers), skewing toward the 1980s, 1990s and 2000s. The network aired encores of 2000s MTV series such as Beavis and Butt-Head and Laguna Beach: The Real Orange County. The network's relaunch included a broadcast of MTV's first hour on the air, which was also simulcast on MTV and online via Facebook live streaming.
typically small animals with elongated bodies, short legs, short, round ears, and thick fur. Most mustelids are solitary, nocturnal animals, and are active year-round. With the exception of the sea otter, they have anal scent glands that produce a strong-smelling secretion the animals use for sexual signaling and marking territory. Most mustelid reproduction involves embryonic diapause. The embryo does not immediately implant in the uterus, but remains dormant for some time. No development takes place as long as the embryo remains unattached to the uterine lining. As a result, the normal gestation period is extended, sometimes up to a year. This allows the young to be born under favorable environmental conditions. Reproduction has a large energy cost, so it is to a female's benefit to have available food and mild weather. The young are more likely to survive if birth occurs after previous offspring have been weaned. Mustelids are predominantly carnivorous, although some eat vegetable matter at times. While not all mustelids share an identical dentition, they all possess teeth adapted for eating flesh, including the presence of shearing carnassials. One characteristic trait is a meat-shearing upper-back molar that is rotated 90°, towards the inside of the mouth. With variation between species, the most common dental formula is . Ecology The fisher, tayra, and martens are partially arboreal, while badgers are fossorial. A number of mustelids have aquatic lifestyles, ranging from semiaquatic minks and river otters to the fully aquatic sea otter, which is one of the few nonprimate mammals known to use tools while foraging. It uses "anvil" stones to crack open the shellfish that form a significant part of its diet. It is a "keystone species", keeping its prey populations in balance so some do not outcompete the others and destroy the kelp in which they live. The black-footed ferret is entirely dependent on another keystone species, the prairie dog. 
A family of four ferrets eats 250 prairie dogs in a year; this requires a stable population of prairie dogs from an area of some . Classification Skunks were formerly included as a subfamily of the mustelids, but are now regarded as a separate family (Mephitidae). Mongooses bear a striking resemblance to many mustelids, but belong to a distinctly different suborder—the Feliformia (all those carnivores sharing more recent origins with the cats) and not the Caniformia (those sharing more recent origins with the dogs). Because mongooses and mustelids occupy similar ecological niches, convergent evolution has led to similarity in form and behavior. Diversity The oldest known mustelid from North America is Corumictis wolsani from the early and late Oligocene (early and late Arikareean, Ar1–Ar3) of Oregon. Middle Oligocene Mustelictis from Europe might be a mustelid as well. Other early fossils of the mustelids were dated at the end of the Oligocene to the beginning of the Miocene. Which of these forms are Mustelidae ancestors and which should be considered the first mustelids is unclear. The fossil record indicates that mustelids appeared in the late Oligocene (33 Mya). Extant mustelids are divided into eight subfamilies in 22 genera: Subfamily Taxidiinae Genus Taxidea American badger, T. taxus Subfamily Mellivorinae Genus Mellivora Honey badger, M. capensis Subfamily Melinae Genus Arctonyx Northern hog badger, A. albogularis Greater hog badger, A. collaris Sumatran hog badger, A. hoevenii Genus Meles Japanese badger, M. anakuma Asian badger, M. leucurus European badger, M. meles Caucasian badger, M. canescens Subfamily Helictidinae Genus Melogale Vietnam ferret-badger, M. cucphuongensis Bornean ferret-badger, M. everetti Chinese ferret-badger, M. moschata Javan ferret-badger, M. orientalis Burmese ferret-badger, M. personata Formosan ferret-badger, M. subaurantiaca Subfamily Guloninae Genus Eira Tayra, E. barbara Genus Gulo Wolverine, G. gulo Genus Martes American marten, M.
americana Pacific marten, M. caurina Yellow-throated marten, M. flavigula Beech marten, M. foina Nilgiri marten, M. gwatkinsii European pine marten, M. martes Japanese marten, M. melampus Sable, M. zibellina Genus Pekania Fisher, P. pennanti Subfamily Ictonychinae Genus Galictis Lesser grison, G. cuja Greater grison, G. vittata Genus Ictonyx Saharan striped polecat, I. libycus Striped polecat, I. striatus Genus Lyncodon Patagonian weasel, L. patagonicus Genus Poecilogale African striped weasel, P. albinucha Genus Vormela Marbled polecat, V. peregusna Subfamily Lutrinae (otters) Genus Aonyx African clawless otter, A. capensis Asian small-clawed otter, A. cinerea Congo clawless otter, A. congicus Genus Enhydra Sea otter, E. lutris Genus Lontra North American river otter, L. canadensis Marine otter, L. felina Neotropical otter, L. longicaudis Southern river otter, L. provocax Genus Lutra Eurasian otter, L. lutra Hairy-nosed otter, L. sumatrana Japanese otter, L. nippon Genus Hydrictis Spotted-necked otter, H. maculicollis Genus Lutrogale Smooth-coated otter, L. perspicillata Genus Pteronura Giant otter, P. brasiliensis Subfamily Mustelinae (weasels, ferrets, and mink) Genus Mustela Mountain weasel, M. altaica Stoat (Beringian ermine), M. erminea Steppe polecat, M. eversmannii Domestic ferret, M. furo Haida ermine, M. haidarum Japanese weasel, M. itatsi Yellow-bellied weasel, M. kathiah European mink, M. lutreola Indonesian mountain weasel, M. lutreolina Black-footed ferret, M. nigripes Least weasel, M. nivalis Malayan weasel, M. nudipes European polecat, M. putorius American ermine, M. richardsonii Siberian weasel, M. sibirica Back-striped weasel, M. strigidorsa Genus Neogale Amazon weasel, N. africana Colombian weasel, N. felipei Long-tailed weasel, N. frenata American mink, N. vison Sea mink, N.
macrodon Fossil mustelids Extinct genera of the family Mustelidae include: Brachypsalis Chamitataxus Corumictis Cyrnaonyx Ekorus Eomellivora Hoplictis Megalictis Oligobunis Plesictis Sthenictis Teruelictis Trochictis Phylogeny Multigene phylogenies constructed by Koepfli et al. (2008) and Law et al. (2018) found that Mustelidae comprises nine subfamilies. The early mustelids appear to have undergone two rapid bursts of diversification in Eurasia, with the resulting species spreading to other continents only later. Mustelid species diversity is often attributed to an adaptive radiation coinciding with the mid-Miocene climate transition. Contrary to expectations, Law et al. (2018) found no evidence for rapid bursts of lineage diversification at the origin of the Mustelidae, and further analyses of lineage diversification rates using molecular and fossil-based methods did not find associations between rates of lineage diversification and the mid-Miocene climate transition as previously hypothesized. See also List of heaviest extant mustelids
Aspen Hill, Hyattsville/Langley Park, Glenmont/Wheaton, Bladensburg, Riverdale Park, Gaithersburg, as well as Highlandtown and Greektown in East Baltimore. Salvadorans are the largest Hispanic group in Maryland. Other Hispanic groups with significant populations in the state include Mexicans, Puerto Ricans, and Hondurans. Though the Salvadoran population is more concentrated in the area around Washington, D.C., and the Puerto Rican population is more concentrated in the Baltimore area, all other major Hispanic groups in the state are evenly dispersed between these two areas. Maryland has one of the most diverse Hispanic populations in the country, with significant populations from various Caribbean and Central American nations. Asian Americans are concentrated in the suburban counties surrounding Washington, D.C., and in Howard County, with Korean American and Taiwanese American communities in Rockville, Gaithersburg, and Germantown and a Filipino American community in Fort Washington. Numerous Indian Americans live across the state, especially in central Maryland. Professional jobs in the region attract educated Asian and African immigrants, and Maryland has the fifth-largest proportion of racial minorities in the country. In 2006, 645,744 residents were counted as foreign born, mainly people from Latin America and Asia. About four percent are undocumented immigrants. Maryland also has a large Korean American population: 1.7 percent of the population is Korean, while, as a whole, 6.7 percent is Asian. According to The Williams Institute's analysis of the 2010 U.S. census, 12,538 same-sex couples were living in Maryland, representing 5.8 same-sex couples per 1,000 households. In 2019, non-Hispanic white Americans were 49.8% of Maryland's population (White Americans, including White Hispanics, were 57.3%), making Maryland a majority minority state.
50.2% of Maryland's population is non-white or Hispanic or Latino, the highest percentage of any state on the East Coast, and the highest percentage after the majority-minority states of Hawaii, New Mexico, Texas, California, and Nevada. By 2031, minorities are projected to become the majority of voting-eligible residents of Maryland. Religion Maryland has been historically prominent in American Catholic tradition because the English colony of Maryland was intended by George Calvert as a haven for English Catholics. Baltimore was the seat of the first Catholic bishop in the U.S. (1789), and Emmitsburg was the home and burial place of the first American-born citizen to be canonized, St. Elizabeth Ann Seton. Georgetown University, the first Catholic university, was founded in 1789 in what was then part of Maryland. The Basilica of the National Shrine of the Assumption of the Virgin Mary in Baltimore was the first Roman Catholic cathedral built in the United States, and the Archbishop of Baltimore is, albeit without formal primacy, the United States' quasi-primate, and often a cardinal. Among the immigrants of the 19th and 20th centuries from eastern and southern Europe were many Catholics. Despite its historic relevance to the Catholic Church in the United States, the percentage of Catholics in the state of Maryland is below the national average of 20%. Demographically, both Protestants and those identifying with no religion are more numerous than Catholics. According to the Pew Research Center, 69 percent of Maryland's population identifies as Christian. Nearly 52% of the adult population are Protestants. After Protestantism, Catholicism is the second-largest religious affiliation, comprising 15 percent of the population. Amish and Mennonite communities are found in St. Mary's, Garrett, and Cecil counties. Judaism is the largest non-Christian religion in Maryland, with 241,000 adherents, or four percent of the total population.
Jews are numerous throughout Montgomery County and in Pikesville and Owings Mills northwest of Baltimore. An estimated 81,500 Jewish Americans live in Montgomery County, constituting approximately 10% of the total population. The Seventh-day Adventist Church's world headquarters and the Ahmadiyya Muslims' national headquarters are located in Silver Spring, just outside the District of Columbia. Economy The Bureau of Economic Analysis estimates that Maryland's gross state product in 2016 was $382.4 billion. However, Maryland has been using the Genuine Progress Indicator, an indicator of well-being, to guide the state's development, rather than relying only on growth indicators like GDP. According to the U.S. Census Bureau, Maryland households are the wealthiest in the country, with a 2013 median household income of $72,483, which puts it ahead of New Jersey and Connecticut, which are second and third respectively. Two of Maryland's counties, Howard and Montgomery, are the second- and eleventh-wealthiest counties in the nation, respectively. In 2013, Maryland had the most millionaires per capita, with a ratio of 7.7 percent. Also, the state's poverty rate of 7.8 percent is the lowest in the country. Per capita personal income in 2006 was $43,500, fifth in the nation. As of February 2018, the state's unemployment rate was 4.2 percent. Maryland's economy benefits from the state's proximity to the federal government in Washington, D.C., with an emphasis on technical and administrative tasks for the defense/aerospace industry and bio-research laboratories, as well as staffing of satellite government headquarters in the suburban or exurban Baltimore/Washington area. Ft. Meade serves as the headquarters of the Defense Information Systems Agency, United States Cyber Command, and the National Security Agency/Central Security Service. In addition, a number of educational and medical research institutions are located in the state.
In fact, the various components of The Johns Hopkins University and its medical research facilities are now the largest single employer in the Baltimore area. Altogether, white-collar technical and administrative workers comprise 25 percent of Maryland's labor force, attributable in part to Maryland's position within the Washington metropolitan area, where federal government office employment is relatively high. Manufacturing, while large in dollar value, is highly diversified, with no sub-sector contributing over 20 percent of the total. Typical forms of manufacturing include electronics, computer equipment, and chemicals. The once-mighty primary metals sub-sector, which included what was then the largest steel factory in the world at Sparrows Point, still exists, but is pressed by foreign competition, bankruptcies, and mergers. During World War II the Glenn Martin Company (now part of Lockheed Martin) airplane factory employed some 40,000 people. Mining other than construction materials is virtually limited to coal, which is located in the mountainous western part of the state. The brownstone quarries in the east, which gave Baltimore and Washington much of their characteristic architecture in the mid-19th century, were once a predominant natural resource. Historically, there were small gold-mining operations in Maryland, some near Washington, but these no longer exist. Baltimore port One major service activity is transportation, centered on the Port of Baltimore and its related rail and trucking access. The port ranked 17th in the U.S. by tonnage in 2008. Although the port handles a wide variety of products, the most typical imports are raw materials and bulk commodities, such as iron ore, petroleum, sugar, and fertilizers, often distributed to the relatively close manufacturing centers of the inland Midwest via good overland transportation. The port also receives several brands of imported motor vehicles and is the number one auto port in the U.S.
Baltimore City is among the top 15 largest ports in the nation, and was one of six major U.S. ports that were part of the February 2006 controversy over the Dubai Ports World deal. The state as a whole is heavily industrialized, with a booming economy and influential technology centers. Its computer industries are some of the most sophisticated in the United States, and the federal government has invested heavily in the area. Maryland is home to several large military bases and scores of high-level government jobs. The Chesapeake and Delaware Canal is a canal on the Eastern Shore that connects the waters of the Delaware River with those of the Chesapeake Bay, and in particular with the Port of Baltimore, carrying 40 percent of the port's ship traffic. Agriculture and fishing Maryland has a large food-production sector. A large component of this is commercial fishing, centered in the Chesapeake Bay, but also including activity off the short Atlantic seacoast. The largest catches by species are the blue crab, oysters, striped bass, and menhaden. The Bay also has overwintering waterfowl in its wildlife refuges, which support a sport-hunting tourism sector. Maryland has large areas of fertile agricultural land in its coastal and Piedmont zones, though this land use is being encroached upon by urbanization. Agriculture is oriented toward dairy farming (especially in foothill and Piedmont areas) serving the milksheds of the nearby large cities, plus specialty perishable horticulture crops, such as cucumbers, watermelons, sweet corn, tomatoes, muskmelons, squash, and peas (source: USDA Crop Profiles). The southern counties of the western shoreline of Chesapeake Bay are warm enough to support a tobacco cash-crop zone, which has existed since early Colonial times but declined greatly after a state government buy-out in the 1990s. There is also a large automated chicken-farming sector in the state's southeastern part; Salisbury is home to Perdue Farms.
Maryland's food-processing plants are the most significant type of manufacturing by value in the state. Biotechnology Maryland is a major center for life sciences research and development. With more than 400 biotechnology companies located there, Maryland is the fourth-largest nexus in this field in the United States. Institutions and government agencies with an interest in research and development located in Maryland include the Johns Hopkins University, the Johns Hopkins Applied Physics Laboratory, more than one campus of the University System of Maryland, Goddard Space Flight Center, the United States Census Bureau, the National Institutes of Health (NIH), the National Institute of Standards and Technology (NIST), the National Institute of Mental Health (NIMH), the Walter Reed National Military Medical Center, the federal Food and Drug Administration (FDA), the Howard Hughes Medical Institute, the Celera Genomics company, the J. Craig Venter Institute (JCVI), and AstraZeneca (formerly MedImmune). Maryland is home to defense contractor Emergent BioSolutions, which manufactures and provides an anthrax vaccine to U.S. government military personnel. Tourism Tourism is popular in Maryland. Many tourists visit Baltimore, the beaches of the Eastern Shore, and the nature of western Maryland. Attractions in Baltimore include the Harborplace, the Baltimore Aquarium, Fort McHenry, as well as the Camden Yards baseball stadium. Ocean City on the Atlantic Coast has been a popular beach destination in summer, particularly since the Chesapeake Bay Bridge was built in 1952 connecting the Eastern Shore to the more populated Maryland cities. The state capital of Annapolis offers sites such as the state capitol building, the historic district, and the waterfront. Maryland also has several sites of interest to military history, given Maryland's role in the American Civil War and in the War of 1812. 
Other attractions include the historic and picturesque towns along the Chesapeake Bay, such as Saint Mary's, Maryland's first colonial settlement and original capital. Healthcare As of 2017, the top two health insurers across all types of insurance were CareFirst BlueCross BlueShield, with a 47% market share, followed by UnitedHealth Group at 15%. Maryland has experimented with healthcare payment reforms, notably beginning in the 1970s with an all-payer rate-setting program regulated by the Health Services Cost Review Commission. In 2014, it switched to a global budget revenue system, whereby hospitals receive a capitated payment to care for their population. Transportation The Maryland Department of Transportation oversees most transportation in the state through its various administration-level agencies. The independent Maryland Transportation Authority maintains and operates the state's eight toll facilities. Roads Maryland's Interstate highways include Interstate 95 (I-95), which enters the northeast portion of the state, travels through Baltimore, and becomes part of the eastern section of the Capital Beltway to the Woodrow Wilson Bridge. I-68 connects the western portions of the state to I-70 at the small town of Hancock. I-70 enters from Pennsylvania north of Hancock and continues east to Baltimore, connecting Hagerstown and Frederick along the way. I-83 connects Baltimore to southern central Pennsylvania (Harrisburg and York, Pennsylvania). Maryland also has a portion of I-81 that travels through the state near Hagerstown. I-97, fully contained within Anne Arundel County and the shortest one- or two-digit interstate highway in the contiguous US, connects the Baltimore area to the Annapolis area. There are also several auxiliary Interstate highways in Maryland.
Among them are two beltways encircling the major cities of the region: I-695, the McKeldin (Baltimore) Beltway, which encircles Baltimore; and a portion of I-495, the Capital Beltway, which encircles Washington, D.C. I-270, which connects the Frederick area with Northern Virginia and the District of Columbia through major suburbs to the northwest of Washington, is a major commuter route and is as wide as fourteen lanes at points. I-895, also known as the Harbor Tunnel Thruway, provides an alternate route to I-95 across the Baltimore Harbor. Both I-270 and the Capital Beltway had been extremely congested; however, the Intercounty Connector (ICC; MD 200) has alleviated some of the congestion over time. Construction of the ICC was a major part of the campaign platform of former Governor Robert Ehrlich, who was in office from 2003 until 2007, and of Governor Martin O'Malley, who succeeded him. I-595, an unsigned highway concurrent with US 50/US 301, is the longest unsigned interstate in the country and connects Prince George's County and Washington, D.C. with Annapolis and the Eastern Shore via the Chesapeake Bay Bridge. Maryland also has a state highway system with routes numbered from 2 through 999; however, most of the higher-numbered routes are either unsigned or relatively short. Major state highways include Routes 2 (Governor Ritchie Highway/Solomons Island Road/Southern Maryland Blvd.), 4 (Pennsylvania Avenue/Southern Maryland Blvd./Patuxent Beach Road/St. Andrew's Church Road), 5 (Branch Avenue/Leonardtown Road/Point Lookout Road), 32, 45 (York Road), 97 (Georgia Avenue), 100 (Paul T. Pitcher Memorial Highway), 210 (Indian Head Highway), 235 (Three Notch Road), 295 (Baltimore-Washington Parkway), 355 (Wisconsin Avenue/Rockville Pike/Frederick Road), 404 (Queen Anne Highway/Shore Highway), and 650 (New Hampshire Avenue). Airports Maryland's largest airport is Baltimore-Washington International Thurgood Marshall Airport, more commonly referred to as BWI.
The airport is named for the Baltimore-born Thurgood Marshall, the first African-American Supreme Court justice. The only other airports with commercial service are at Hagerstown and Salisbury. The Maryland suburbs of Washington, D.C., are also served by the other two airports in the region, Ronald Reagan Washington National Airport and Dulles International Airport, both in Northern Virginia. The College Park Airport is the nation's oldest, founded in 1909, and is still in use; Wilbur Wright trained military aviators there. Rail Amtrak trains, including the high-speed Acela Express, serve Baltimore's Penn Station, BWI Airport, New Carrollton, and Aberdeen along the Washington, D.C., to Boston Northeast Corridor. In addition, train service is provided to Rockville and Cumberland by Amtrak's Washington, D.C., to Chicago Capitol Limited. The WMATA's Metrorail rapid transit and Metrobus local bus systems (the second- and sixth-busiest in the nation among their respective modes) provide service in Montgomery and Prince George's counties and connect them to Washington, D.C., with the express Metrobus Route B30 serving BWI Airport. The Maryland Transit Administration (often abbreviated as "MTA Maryland"), a state agency that is part of the Maryland Department of Transportation, also provides transit services within the state. Headquartered in Baltimore, MTA's transit services are largely focused on central Maryland, as well as some portions of the Eastern Shore and Southern Maryland. Baltimore's Light RailLink and Metro SubwayLink systems serve its densely populated inner city and the surrounding suburbs. The MTA also serves the city and its suburbs with its local bus service (the ninth-largest system in the nation). The MTA's Commuter Bus system provides express coach service on longer routes connecting Washington, D.C., and Baltimore to parts of Central and Southern Maryland as well as the Eastern Shore.
The commuter rail service, known as MARC, operates three lines, all of which terminate at Washington Union Station and provide service to Baltimore's Penn and Camden stations, Perryville, Frederick, and Martinsburg, West Virginia. In addition, many suburban counties operate local bus systems which connect to and complement the larger MTA and WMATA/Metro services. The MTA will also administer the Purple Line, an under-construction light rail line that will connect the Maryland branches of the Red, Green/Yellow, and Orange lines of the Washington Metro, as well as offer transfers to all three lines of the MARC commuter rail system. Freight rail transport is handled principally by two Class I railroads, as well as several smaller regional and local carriers. CSX Transportation has the more extensive trackage in the state, followed by Norfolk Southern Railway. Major rail yards are located in Baltimore and Cumberland, with an intermodal terminal (rail, truck, and marine) in Baltimore. Law and government The government of Maryland is conducted according to the state constitution. The government of Maryland, like the other 49 state governments, has exclusive authority over matters that lie entirely within the state's borders, except as limited by the Constitution of the United States. Power in Maryland is divided among three branches of government: executive, legislative, and judicial. The Maryland General Assembly is composed of the Maryland House of Delegates and the Maryland Senate. Maryland's governor is unique in the United States in that the office is vested with significant authority over budgeting: the legislature may not increase the governor's proposed budget expenditures. Unlike in many other states, significant autonomy is granted to many of Maryland's counties. Most of the business of government is conducted in Annapolis, the state capital.
Elections for governor and most statewide offices, as well as most county elections, are held in midterm-election years (even-numbered years not divisible by four). The judicial branch of state government consists of one unified District Court of Maryland, which sits in every county and Baltimore City, as well as 24 Circuit Courts sitting in each county and Baltimore City, the latter being courts of general jurisdiction for all civil disputes over $30,000, all equitable jurisdiction, and major criminal proceedings. The intermediate appellate court is known as the Court of Special Appeals, and the state supreme court is the Court of Appeals. The appearance of the judges of the Maryland Court of Appeals is unique: Maryland is the only state whose judges wear red robes. Taxation Maryland imposes five income tax brackets, ranging from 2 to 6.25 percent of personal income. The city of Baltimore and Maryland's 23 counties levy local "piggyback" income taxes at rates between 1.25 and 3.2 percent of Maryland taxable income. Local officials set the rates, and the revenue is returned to the local governments quarterly. The top income tax bracket of 9.45 percent is the fifth-highest combined state and local income tax rate in the country, behind New York City's 11.35 percent, California's 10.3 percent, Rhode Island's 9.9 percent, and Vermont's 9.5 percent. Maryland's state sales tax is six percent. All real property in Maryland is subject to the property tax. Generally, properties that are owned and used by religious, charitable, or educational organizations, or property owned by the federal, state, or local governments, are exempt. Property tax rates vary widely. No restrictions or limitations on property taxes are imposed by the state, meaning cities and counties can set tax rates at the level they deem necessary to fund governmental services.
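The combined-rate figures above follow from simple arithmetic: the state tax is progressive across its brackets, while the county "piggyback" tax is a flat rate on the same taxable income, so the top combined marginal rate is 6.25% + 3.2% = 9.45%. The sketch below illustrates the mechanics; the bracket thresholds are hypothetical placeholders (Maryland's actual thresholds are set by statute and change over time), and only the 6.25% top state rate and the 3.2% maximum local rate come from the text.

```python
# Sketch of a progressive state tax plus a flat local "piggyback" tax.
# Bracket boundaries are ILLUSTRATIVE ONLY, not Maryland's statutory values.
STATE_BRACKETS = [
    (1_000, 0.02),            # first slice taxed at the 2% bottom rate
    (2_000, 0.03),            # hypothetical intermediate brackets
    (3_000, 0.04),
    (100_000, 0.0475),
    (float("inf"), 0.0625),   # 6.25% top state rate cited in the text
]

def state_tax(income: float) -> float:
    """Apply each marginal rate only to the slice of income in its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in STATE_BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def total_tax(income: float, local_rate: float) -> float:
    # The local tax is flat on Maryland taxable income and is collected
    # with the state return, then remitted back to the county quarterly.
    return state_tax(income) + income * local_rate

# A top-bracket earner in a 3.2% county faces a combined marginal rate of
# 6.25% + 3.2% = 9.45%, the figure the text ranks fifth-highest nationally.
print(round(0.0625 + 0.032, 4))  # 0.0945
```

Because the local rate is flat rather than bracketed, it adds the same number of percentage points at every income level, which is why the combined top rate is a straight sum of the two rates.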
Elections Since before the Civil War, Maryland's elections have been largely controlled by the Democrats, who account for 54.9% of all registered voters as of May 2017. State elections are dominated by Baltimore and the populous suburban counties bordering Washington, D.C., and Baltimore: Montgomery, Prince George's, Anne Arundel, and Baltimore counties. As of July 2017, sixty-six percent of the state's population resides in these jurisdictions, most of which contain large, traditionally Democratic voting blocs: African Americans in Baltimore City and Prince George's, federal employees in Prince George's, Anne Arundel, and Montgomery, and post-graduates in Montgomery. The remainder of the state, particularly Western Maryland and the Eastern Shore, is more supportive of Republicans. One of Maryland's best-known political figures is a Republican: former governor Spiro Agnew, who pleaded no contest to tax evasion and resigned the vice presidency in 1973. In 1980, Maryland was one of six states to vote for Jimmy Carter. In 1992, Bill Clinton fared better in Maryland than in any other state except his home state of Arkansas. In 1996, Maryland was Clinton's sixth best; in 2000, Maryland ranked fourth for Gore; and in 2004, John Kerry showed his fifth-best performance in Maryland. In 2008, Barack Obama won the state's 10 electoral votes with 61.9 percent of the vote to John McCain's 36.5 percent. In 2002, former Governor Robert Ehrlich became the first Republican elected to that office in four decades; after one term, he lost his seat to Baltimore Mayor and Democrat Martin O'Malley. Ehrlich ran again for governor in 2010, losing again to O'Malley. The 2006 election brought no change in the pattern of Democratic dominance. After Democratic Senator Paul Sarbanes announced his retirement, Democratic Congressman Benjamin Cardin defeated Republican Lieutenant Governor Michael S. Steele with 55 percent of the vote to Steele's 44 percent.
While Republicans usually win more counties by piling up large margins in the west and east, they are usually swamped by the more densely populated and heavily Democratic Baltimore–Washington axis. In 2008, for instance, McCain won 17 counties to Obama's six; Obama also carried Baltimore City. While McCain won most of the western and eastern counties by margins of 2-to-1 or more, he was almost completely shut out in the larger counties surrounding Baltimore and Washington; every large county except Anne Arundel went for Obama. From 2007 to 2011, U.S. Congressman Steny Hoyer (MD-5), a Democrat, served as Majority Leader of the House of Representatives for the 110th and 111th Congresses, returning to that post in 2019. In addition, Hoyer served as House Minority Whip from 2003 to 2006 and from 2012 to 2018. His district covers parts of Anne Arundel and Prince George's counties, in addition to all of Charles, Calvert, and St. Mary's counties in southern Maryland. In 2010, Republicans won control of most counties. The Democratic Party remained in control of eight county governments, including that of Baltimore. In 2014, Larry Hogan, a moderate Republican, was elected Governor of Maryland. Hogan is only the second Republican to become Governor of Maryland since Spiro Agnew, who resigned the office in 1969 to become vice president. In 2018, Hogan was re-elected to a second term. Per the Constitution of Maryland, Hogan is term-limited and may not run for a third consecutive term in the 2022 Maryland gubernatorial election. In a 2020 study, the Election Law Journal ranked Maryland the 5th-easiest state for citizens to vote in. LGBT rights and community The first person known to describe himself as a drag queen was William Dorsey Swann, born enslaved in Hancock, Maryland. Swann was the first American on record to pursue legal and political action to defend the LGBTQ community's right to assemble.
In February 2010, Attorney General Doug Gansler issued an opinion stating that Maryland law should honor same-sex marriages performed out of state. At the time, the state's highest court had previously upheld Maryland's ban on same-sex marriage. On March 1, 2012, Maryland Governor Martin O'Malley signed the freedom-to-marry bill into law after it passed in the state legislature. Immediately afterward, opponents of same-sex marriage began collecting signatures to overturn the law, and it was scheduled to face a referendum, as Question 6, in the November 2012 election. In May 2012, Maryland's Court of Appeals ruled that the state would recognize the marriages of same-sex couples who married out of state, no matter the outcome of the November election. Voters approved Question 6 by 52% to 48% on November 6, 2012, and same-sex couples began marrying in Maryland on January 1, 2013. A majority (57%) of Maryland voters had said they would vote to uphold the freedom to marry at the ballot in November 2012, with 37% saying they would vote against marriage for all couples. This is consistent with a January 2011 Gonzales Research & Marketing Strategies poll showing 51% support for marriage in the state. Media A well-known newspaper is The Baltimore Sun. Many residents of the Washington metropolitan area receive The Washington Post. The most populous areas are served by either Baltimore or Washington, D.C., broadcast stations. The Eastern Shore is served primarily by broadcast media based around the Delmarva Peninsula; the northeastern section receives both Baltimore and Philadelphia stations. Garrett County, which is mountainous, is served by stations from Pittsburgh and requires cable or satellite for reception. Maryland is served by the statewide PBS member station Maryland Public Television (MPT). Education Primary and secondary education Education Week ranked Maryland #1 in its nationwide 2009–2013 Quality Counts reports.
The College Board's 9th Annual AP Report to the Nation also ranked Maryland first. Primary and secondary education in Maryland is overseen by the Maryland State Department of Education, which is headquartered in Baltimore. The highest educational official in the state is the State Superintendent of Schools, who is appointed by the State Board of Education to a four-year term of office. The Maryland General Assembly has given the Superintendent and State Board autonomy to make educationally related decisions, limiting the legislature's influence on the day-to-day functions of public education. Each county and county-equivalent in Maryland has a local Board of Education charged with running the public schools in that particular jurisdiction. The budget for education was $5.5 billion in 2009, representing about 40 percent of the state's general fund. Data from the 2017 census shows that, among large school districts, four Maryland districts are in the top six for per-pupil annual spending, exceeded only by the Boston and New York City districts. Maryland has a broad range of private primary and secondary schools. Many of these are affiliated with various religious sects, including parochial schools of the Catholic Church, Quaker schools, Seventh-day Adventist schools, and Jewish schools. In 2003, Maryland law was changed to allow for the creation of publicly funded charter schools, although the charter schools must be approved by their local Board of Education and are not exempt from state laws on education, including collective bargaining laws. In 2008, the state led the entire country in the percentage of students passing Advanced Placement examinations: 23.4 percent of students earned passing grades on the AP tests given in May 2008. This marked the first year that Maryland earned this honor. Three Maryland high schools (in Montgomery County) were ranked among the top 100 in the country by U.S. News in 2009, based in large part on AP test scores.
Colleges and universities Maryland has several historic and renowned private colleges and universities, the most prominent of which is Johns Hopkins University, founded in 1876 with a grant from Baltimore entrepreneur Johns Hopkins. The state's first public university, the University of Maryland, Baltimore, was founded in 1807 and houses the University of Maryland's academic health and human services programs as well as one of the state's two law schools (the other being the University of Baltimore School of Law). Its seven professional and graduate schools train the majority of the state's physicians, nurses, dentists, lawyers, social workers, and pharmacists. The flagship university and largest undergraduate institution in Maryland is the University of Maryland, College Park, which was founded as the Maryland Agricultural College in 1856 and became a public land-grant college in 1864. Towson University, founded in 1866, is the state's second-largest university. In 1974, Maryland, along with seven other states, mainly in the South, submitted plans to desegregate its state universities; Maryland's plans were approved by the U.S. Department of Health, Education and Welfare. Baltimore is home to the University of Maryland, Baltimore County and the Maryland Institute College of Art. The majority of public universities in the state (Bowie State University, Coppin State University, Frostburg State University, Salisbury University, and the University of Maryland Eastern Shore) are affiliated with the University System of Maryland. Two state-funded institutions, Morgan State University and St. Mary's College of Maryland, as well as two federally funded institutions, the Uniformed Services University of the Health Sciences and the United States Naval Academy, are not affiliated with the University System of Maryland. The University of Maryland Global Campus is the largest public university in Maryland and one of the largest distance-learning institutions in the world. St.
John's College in Annapolis and Washington College in Chestertown, both private institutions, are the oldest colleges in the state and among the oldest in the country. Other private institutions include Mount St. Mary's University, McDaniel College (formerly known as Western Maryland College), Hood College, Stevenson University (formerly known as Villa Julie College), Loyola University Maryland, and Goucher College, among others. Public libraries Maryland's 24 public library systems deliver public education for everyone in the state through a curriculum that comprises three pillars: Self-Directed Education (books and materials in all formats, e-resources), Research Assistance & Instruction (individualized research assistance, classes for students of all ages), and Instructive & Enlightening Experiences (e.g., Summer Reading Clubs, author events). Maryland's library systems include, in part:
Baltimore County Public Library System
Cecil County Public Library
Enoch Pratt Free Library
Frederick County Public Library
Harford County Public Library
Howard County Public Library
Montgomery County Public Libraries
Prince George's County Memorial Library System
St. Mary's County Public Library
Many of the library systems have established formalized partnerships with other educational institutions in their counties and regions. Sports With two major metropolitan areas, Maryland has a number of major and minor professional sports franchises. Two National Football League teams play in Maryland: the Baltimore Ravens in Baltimore and the Washington Commanders in Landover. The Baltimore Colts represented the NFL in Baltimore from 1953 to 1983 before moving to Indianapolis. The Baltimore Orioles are the state's Major League Baseball franchise. The National Hockey League's Washington Capitals and the National Basketball Association's Washington Wizards played in Maryland until the construction of an arena in Washington, D.C.
in 1997 (now known as Capital One Arena). The University of Maryland's athletic teams are the Maryland Terrapins. Maryland enjoys considerable historical repute for the talented sports players of its past, including Cal Ripken Jr. and Babe Ruth. In 2012, The Baltimore Sun published a list of Maryland's top ten athletes in the state's history. The list includes Babe Ruth, Cal Ripken Jr., Johnny Unitas, Brooks Robinson, Frank Robinson, Ray Lewis, Michael Phelps, Jimmie Foxx, Jim Parker, and Wes Unseld. Other professional sports franchises in the state include three affiliated minor league baseball teams, one independent league baseball team, the Baltimore Blast indoor soccer team, two indoor football teams, three low-level outdoor soccer teams, and the Chesapeake Bayhawks of Major League Lacrosse. Maryland is also home to one of the three races in horse racing's annual Triple Crown, the Preakness Stakes, run at Pimlico Race Course in Baltimore.

parties); obtained closed voting booths to prevent party workers from "assisting" voters; initiated primary elections to keep party bosses from selecting candidates; and had candidates listed without party symbols, which discouraged the illiterate from participating. These measures worked against ill-educated whites and blacks. Blacks resisted such efforts, with suffrage groups conducting voter education. Blacks defeated three efforts to disenfranchise them, making alliances with immigrants to resist various Democratic campaigns. Disenfranchisement bills in 1905, 1907, and 1911 were rebuffed, in large part because of black opposition. Blacks comprised 20% of the electorate and immigrants comprised 15%, and the legislature had difficulty devising requirements against blacks that did not also disadvantage immigrants. The Progressive Era also brought reforms in working conditions for Maryland's labor force. In 1902, the state regulated conditions in mines; outlawed child laborers under the age of 12; mandated compulsory school attendance; and enacted the nation's first workers' compensation law.
The workers' compensation law was overturned in the courts but was redrafted and finally enacted in 1910. The Great Baltimore Fire of 1904 burned for more than 30 hours, destroying 1,526 buildings across 70 city blocks; some 1,231 firefighters worked to bring the blaze under control. With the nation's entry into World War I in 1917, new military bases such as Camp Meade, the Aberdeen Proving Ground, and the Edgewood Arsenal were established, and existing facilities, including Fort McHenry, were greatly expanded. After Georgia congressman William D. Upshaw openly criticized Maryland in 1923 for not passing Prohibition laws, Baltimore Sun editor Hamilton Owens coined the "Free State" nickname for Maryland in that context, and it was popularized by H. L. Mencken in a series of newspaper editorials. Maryland's urban and rural communities had different experiences during the Great Depression. The "Bonus Army" marched through the state in 1932 on its way to Washington, D.C. Maryland instituted its first income tax in 1937 to generate revenue for schools and welfare. Passenger and freight steamboat service, once important throughout Chesapeake Bay and its many tributary rivers, ended in 1962. Baltimore was a major war production center during World War II; the biggest operations were Bethlehem Steel's Fairfield Yard, which built Liberty ships, and Glenn Martin, an aircraft manufacturer. 1950–present Maryland experienced population growth following World War II. Beginning in the 1960s, as suburban growth took hold around Washington, D.C., and Baltimore, the state began to take on a more mid-Atlantic culture, as opposed to the traditionally Southern and Tidewater culture that had previously dominated most of the state. Agricultural tracts gave way to residential communities, some of them carefully planned, such as Columbia, St. Charles, and Montgomery Village.
Concurrently, the Interstate Highway System was built throughout the state, most notably I-95, I-695, and the Capital Beltway, altering travel patterns. In 1952, the eastern and western halves of Maryland were linked for the first time by the Chesapeake Bay Bridge, which replaced a nearby ferry service. Maryland's regions experienced economic changes following World War II. Heavy manufacturing declined in Baltimore. In Maryland's four westernmost counties, industrial, railroad, and coal mining jobs declined. On the lower Eastern Shore, family farms were bought up by major concerns, and large-scale poultry farms and vegetable farming became prevalent. In Southern Maryland, tobacco farming nearly vanished due to suburban development and a state tobacco buy-out program in the 1990s. In an effort to reverse depopulation due to the loss of working-class industries, Baltimore initiated urban renewal projects in the 1960s with Charles Center and the Baltimore World Trade Center. Some resulted in the break-up of intact residential neighborhoods, producing social volatility, while some older residential areas around the harbor have had units renovated and have become popular with new populations. Geography Maryland is comparable in overall area to Belgium. It is the 42nd-largest and 9th-smallest state, closest in size to Hawaii, the next-smallest state. The next-larger state, its neighbor West Virginia, is almost twice the size of Maryland. Description Maryland possesses a variety of topography within its borders, contributing to its nickname America in Miniature. It ranges from sandy dunes dotted with seagrass in the east, to low marshlands teeming with wildlife and large bald cypress near the Chesapeake Bay, to gently rolling hills of oak forests in the Piedmont Region, and pine groves in the Maryland mountains to the west.
Maryland is bounded on its north by Pennsylvania, on its north and east by Delaware, on its east by the Atlantic Ocean, and on its south and west, across the Potomac River, by West Virginia and Virginia. The mid-portion of this latter border is interrupted by the District of Columbia, which sits on land that was originally part of Montgomery and Prince George's counties and included the town of Georgetown. This land was ceded to the United States federal government in 1790 to form the District of Columbia. (The Commonwealth of Virginia gave land south of the Potomac, including the town of Alexandria; however, Virginia retroceded its portion in 1846.) The Chesapeake Bay nearly bisects the state, and the counties east of the bay are known collectively as the Eastern Shore. Most of the state's waterways are part of the Chesapeake Bay watershed, with the exceptions of a tiny portion of extreme western Garrett County (drained by the Youghiogheny River as part of the watershed of the Mississippi River), the eastern half of Worcester County (which drains into Maryland's Atlantic coastal bays), and a small portion of the state's northeast corner (which drains into the Delaware River watershed). So prominent is the Chesapeake in Maryland's geography and economic life that there has been periodic agitation to change the state's official nickname to the "Bay State", a nickname that has been used by Massachusetts for decades. The highest point in Maryland is Hoye Crest on Backbone Mountain, in the southwest corner of Garrett County, near the border with West Virginia and near the headwaters of the North Branch of the Potomac River. Near the small town of Hancock, in western Maryland about two-thirds of the way across the state, only a narrow strip of land separates the state's two borders: the Mason–Dixon line to the north and the northward-arching Potomac River to the south.
Portions of Maryland are included in various official and unofficial geographic regions. For example, the Delmarva Peninsula is composed of the Eastern Shore counties of Maryland, the entire state of Delaware, and the two counties that make up the Eastern Shore of Virginia, whereas the westernmost counties of Maryland are considered part of Appalachia. Much of the Baltimore–Washington corridor lies just south of the Piedmont in the Coastal Plain, though it straddles the border between the two regions. Geology Earthquakes in Maryland are infrequent and small due to the state's distance from active seismic zones. The M5.8 Virginia earthquake in 2011 was felt moderately throughout Maryland. Buildings in the state are not well designed for earthquakes and can suffer damage easily. Maryland has no natural lakes, mostly due to the lack of glacial history in the area. All lakes in the state today were constructed, mostly via dams; Buckel's Bog is believed by geologists to be the remnant of a former natural lake. Maryland has shale formations containing natural gas, where fracking is theoretically possible. Flora As is typical of states on the East Coast, Maryland's plant life is abundant and healthy. A modest volume of annual precipitation helps to support many types of plants, ranging from seagrass and various reeds at the smaller end of the spectrum to the gigantic Wye Oak, a huge example of white oak, the state tree. Middle Atlantic coastal forests, typical of the southeastern Atlantic coastal plain, grow around Chesapeake Bay and on the Delmarva Peninsula. Moving west, a mixture of Northeastern coastal forests and Southeastern mixed forests cover the central part of the state. The Appalachian Mountains of western Maryland are home to Appalachian-Blue Ridge forests. These give way to Appalachian mixed mesophytic forests near the West Virginia border.
Many foreign species are cultivated in the state, some as ornamentals, others as novelty species. Included among these are the crape myrtle, Italian cypress, southern magnolia, and live oak in the warmer parts of the state, and even hardy palm trees in the warmer central and eastern parts of the state. USDA plant hardiness zones in the state range from Zones 5 and 6 in the extreme western part of the state to Zone 7 in the central part, and Zone 8 around the southern part of the coast, the bay area, and parts of metropolitan Baltimore. Invasive plant species, such as kudzu, tree of heaven, multiflora rose, and Japanese stiltgrass, stifle growth of endemic plant life. Maryland's state flower, the black-eyed Susan, grows in abundance in wildflower groups throughout the state. Fauna The state harbors a considerable number of white-tailed deer, especially in the woody and mountainous west of the state, and overpopulation can become a problem. Mammals can be found ranging from the mountains in the west to the central areas and include black bears, bobcats, foxes, coyotes, raccoons, and otters. There is a population of rare wild (feral) horses found on Assateague Island, believed to be descended from horses that escaped from Spanish galleon shipwrecks. Every year during the last week of July, they are captured and swim across a shallow bay for sale at Chincoteague, Virginia, a conservation technique which ensures the tiny island is not overrun by the horses. The ponies and their sale were popularized by the children's book Misty of Chincoteague. The purebred Chesapeake Bay Retriever was bred specifically for water sports, hunting, and search and rescue in the Chesapeake area. In 1878, the Chesapeake Bay Retriever became the first individual retriever breed recognized by the American Kennel Club, and it was later adopted by the University of Maryland, Baltimore County as its mascot.
Maryland's reptile and amphibian population includes the diamondback terrapin turtle, which was adopted as the mascot of the University of Maryland, College Park, as well as the threatened Eastern box turtle. The state is part of the territory of the Baltimore oriole, which is the official state bird and mascot of the MLB team the Baltimore Orioles. Aside from the oriole, 435 other species of birds have been reported from Maryland. The state insect is the Baltimore checkerspot butterfly, although it is not as common in Maryland as it is at the southern edge of its range. Environment Maryland joined with neighboring states toward the end of the 20th century to improve the health of the Chesapeake Bay. The bay's aquatic life and seafood industry have been threatened by development and by fertilizer and livestock waste entering the bay. In 2007, Forbes.com rated Maryland as the fifth "greenest" state in the country, behind three of the Pacific States and Vermont. Maryland ranks 40th in total energy consumption nationwide, and it managed less toxic waste per capita than all but six states in 2005. In April 2007, Maryland joined the Regional Greenhouse Gas Initiative (RGGI), a regional initiative formed by all the Northeastern states, Washington, D.C., and three Canadian provinces to reduce greenhouse gas emissions. In March 2017, Maryland became the first state with proven gas reserves to ban fracking by passing a law against it. Vermont has such a law, but no shale gas, and New York has such a ban, though it was made by executive order. Climate Maryland has a wide array of climates, due to local variances in elevation, proximity to water, and protection from colder weather due to downslope winds. The eastern half of Maryland, which includes the cities of Ocean City, Salisbury, Annapolis, and the southern and eastern suburbs of Washington, D.C., and Baltimore, lies on the Atlantic Coastal Plain, with flat topography and sandy or muddy soil.
This region has a humid subtropical climate (Köppen Cfa), with hot, humid summers and a short, mild-to-cool winter; it falls under USDA Hardiness zone 8a. The Piedmont region, which includes northern and western greater Baltimore, Westminster, Gaithersburg, Frederick, and Hagerstown, has higher average seasonal snowfall totals, and, as part of USDA Hardiness zones 7b and 7a, sees low winter temperatures more often. From the Cumberland Valley on westward, the climate begins to transition to a humid continental climate (Köppen Dfa). In western Maryland, the higher elevations of Allegany and Garrett counties, including the cities of Cumberland, Frostburg, and Oakland, display more characteristics of the humid continental zone, due in part to elevation; they fall under USDA Hardiness zones 6b and below. Precipitation in the state is characteristic of the East Coast: annual rainfall is plentiful, with more in higher elevations, and nearly every part of Maryland receives appreciable rain each month. Average annual snowfall varies from light in the coastal areas to heavy in the western mountains of the state. Because of its location near the Atlantic Coast, Maryland is somewhat vulnerable to tropical cyclones, although the Delmarva Peninsula and the outer banks of North Carolina provide a large buffer, such that strikes from major hurricanes (category 3 or above) occur infrequently. More often, Maryland gets the remnants of a tropical system that has already come ashore and released most of its energy. Maryland averages around 30–40 days of thunderstorms a year and around six tornado strikes annually. Demographics In the 2020 United States census, the United States Census Bureau found that the population of Maryland was 6,185,278, a 7.1% increase from the 2010 United States census.
The United States Census Bureau estimated that the population of Maryland was 6,045,680 on July 1, 2019, a 4.71% increase from the 2010 United States census and an increase of 2,962 from the prior year. This includes a natural increase since the last census of 269,166 (464,251 births minus 275,093 deaths) and an increase due to net migration of 116,713 people into the state. Immigration from outside the United States resulted in a net increase of 129,730 people, and migration within the country produced a net loss of 13,017 people. The center of population of Maryland is located on the county line between Anne Arundel County and Howard County, in the unincorporated community of Jessup. Maryland's history as a border state has led it to exhibit characteristics of both the Northern and the Southern regions of the United States. Generally, rural Western Maryland between the West Virginian Panhandle and Pennsylvania has an Appalachian culture; the Southern and Eastern Shore regions of Maryland embody a Southern culture, while densely populated Central Maryland — radiating outward from Baltimore and Washington, D.C. — has more in common with that of the Northeast. The U.S. Census Bureau designates Maryland as one of the South Atlantic States, but it is commonly associated with the Mid-Atlantic States and Northeastern United States by other federal agencies, the media, and some residents. Birth data As of 2011, 58.0 percent of Maryland's population younger than age 1 were of minority background. Note: Births in the table don't add up, because Hispanics are counted both by their ethnicity and by their race, giving a higher overall number. Since 2016, data for births of White Hispanic origin are not collected, but are included in one Hispanic group; persons of Hispanic origin may be of any race. Language Spanish (including Spanish Creole) is the second most spoken language in Maryland, after English.
The third and fourth most spoken languages are French (including Patois and Cajun) and Chinese. Other commonly spoken languages include various African languages, Korean, German, Tagalog, Russian, Vietnamese, Italian, various Asian languages, Persian, Hindi, and other Indic languages, Greek, and Arabic. Cities and metro areas Most of the population of Maryland lives in the central region of the state, in the Baltimore metropolitan area and Washington metropolitan area, both of which are part of the Baltimore–Washington metropolitan area. The majority of Maryland's population is concentrated in the cities and suburbs surrounding Washington, D.C., as well as in and around Maryland's most populous city, Baltimore. Historically, these and many other Maryland cities developed along the Fall Line, the line along which rivers, brooks, and streams are interrupted by rapids and waterfalls. Maryland's capital city, Annapolis, is one exception to this pattern since it lies along the banks of the Severn River, close to where it empties into the Chesapeake Bay. The Eastern Shore is less populous and more rural, as are the counties of western Maryland. The two westernmost counties of Maryland, Allegany and Garrett, are mountainous and sparsely populated, resembling West Virginia and Appalachia more than they do the rest of the state. Both eastern and western Maryland are, however, dotted with cities of regional importance, such as Ocean City, Princess Anne, and Salisbury on the Eastern Shore and Cumberland, Frostburg, and Hancock in Western Maryland. Southern Maryland is still somewhat rural, but suburbanization from Washington, D.C., has encroached significantly since the 1960s; important local population centers include Lexington Park, Prince Frederick, California, and Waldorf. Ancestry In 1970, the U.S. Census Bureau reported Maryland's population as 17.8 percent African-American and 80.4 percent non-Hispanic White. 
African Americans form a sizable portion of the state's population, 31.1% as of 2020. Most are descendants of people transported to the area as slaves from West Africa, and many are of mixed race, including European and Native American ancestry. Concentrations of African Americans live in Baltimore City; Prince George's County, a suburb of Washington, D.C., where many work; Charles County; western parts of Baltimore County; and the southern Eastern Shore. New residents of African descent include 20th-century and later immigrants from Nigeria, particularly of the Igbo and Yoruba ethnic groups. Maryland also hosts populations from other African and Caribbean nations. Many immigrants from the Horn of Africa have settled in Maryland, with large communities existing in the suburbs of Washington, D.C. (particularly Montgomery County and Prince George's County), and the city of Baltimore. The Greater Washington area has the largest population of Ethiopians outside of Africa. The Ethiopian community of Greater D.C. was historically based in Washington, D.C.'s, Adams Morgan and Shaw neighborhoods, but as the community has grown, many Ethiopians have settled in Silver Spring. The Washington, D.C., metropolitan area is also home to large Eritrean and Somali communities. The top ancestries reported by Maryland residents are German (15%), Irish (11%), English (8%), American (7%), Italian (6%), and Polish (3%). Irish American populations can be found throughout the Baltimore area and the northern and eastern suburbs of Washington, D.C., in Maryland (descendants of those who moved out of Washington's once predominantly Irish neighborhoods), as well as in Western Maryland, where Irish immigrant laborers helped to build the B&O Railroad. Smaller but much older Irish populations can be found in Southern Maryland, with some roots dating as far back as the early Maryland colony. This population remains culturally very active, and yearly festivals are held.
A large percentage of the population of the Eastern Shore and Southern Maryland is of British American ancestry. The Eastern Shore was settled by Protestants, chiefly Methodist, and the southern counties were initially settled by English Catholics. Western and northern Maryland have large German-American populations. More recent European immigrants of the late 19th and early 20th centuries settled first in Baltimore, attracted by its industrial jobs. Many of their ethnic Italian, Polish, Czech, Lithuanian, and Greek descendants still live in the area. Large ethnic minorities include Eastern Europeans such as Croatians, Belarusians, Russians, and Ukrainians. The share of European immigrants born in Eastern Europe increased significantly between 1990 and 2010. Following the dissolution of the Soviet Union, Yugoslavia, and Czechoslovakia, many immigrants from Eastern Europe came to the United States—12 percent of whom currently reside in Maryland. Hispanic immigrants of the later 20th century have settled in Aspen Hill, Hyattsville/Langley Park, Glenmont/Wheaton, Bladensburg, Riverdale Park, and Gaithersburg, as well as Highlandtown and Greektown in East Baltimore. Salvadorans are the largest Hispanic group in Maryland. Other Hispanic groups with significant populations in the state include Mexicans, Puerto Ricans, and Hondurans. Though the Salvadoran population is more concentrated in the area around Washington, D.C., and the Puerto Rican population is more concentrated in the Baltimore area, all other major Hispanic groups in the state are evenly dispersed between these two areas. Maryland has one of the most diverse Hispanic populations in the country, with significant populations from various Caribbean and Central American nations.
Asian Americans are concentrated in the suburban counties surrounding Washington, D.C., and in Howard County, with Korean American and Taiwanese American communities in Rockville, Gaithersburg, and Germantown and a Filipino American community in Fort Washington. Numerous Indian Americans live across the state, especially in central Maryland. The region's professional jobs attract educated Asian and African immigrants, and Maryland has the fifth-largest proportion of racial minorities in the country. In 2006, 645,744 residents were counted as foreign-born, mainly people from Latin America and Asia. About four percent are undocumented immigrants. Maryland also has a large Korean American population: 1.7 percent of residents are Korean, while 6.7 percent are Asian as a whole. According to The Williams Institute's analysis of the 2010 U.S. census, 12,538 same-sex couples were living in Maryland, representing 5.8 same-sex couples per 1,000 households. In 2019, non-Hispanic white Americans were 49.8% of Maryland's population (White Americans, including White Hispanics, were 57.3%), which made Maryland a majority minority state. 50.2% of Maryland's population is non-white or Hispanic or Latino, the highest percentage of any state on the East Coast, and the highest percentage after the majority-minority states of Hawaii, New Mexico, Texas, California, and Nevada. By 2031, minorities are projected to become the majority of voting-eligible residents of Maryland. Religion Maryland has been historically prominent in American Catholic tradition because the English colony of Maryland was intended by George Calvert as a haven for English Catholics. Baltimore was the seat of the first Catholic bishop in the U.S. (1789), and Emmitsburg was the home and burial place of the first American-born citizen to be canonized, St. Elizabeth Ann Seton. Georgetown University, the first Catholic university in the U.S., was founded in 1789 in what was then part of Maryland.
The Basilica of the National Shrine of the Assumption of the Virgin Mary in Baltimore was the first Roman Catholic cathedral built in the United States, and the Archbishop of Baltimore is, albeit without formal primacy, the United States' quasi-primate, and often a cardinal. Among the immigrants of the 19th and 20th centuries from eastern and southern Europe were many Catholics. Despite its historic relevance to the Catholic Church in the United States, the percentage of Catholics in the state of Maryland is below the national average of 20 percent. Demographically, both Protestants and those identifying with no religion are more numerous than Catholics. According to the Pew Research Center, 69 percent of Maryland's population identify as Christian. Nearly 52 percent of the adult population are Protestants. After Protestantism, Catholicism is the second-largest religious affiliation, comprising 15 percent of the population. Amish/Mennonite communities are found in St. Mary's, Garrett, and Cecil counties. Judaism is the largest non-Christian religion in Maryland, with 241,000 adherents, or four percent of the total population. Jews are numerous throughout Montgomery County and in Pikesville and Owings Mills northwest of Baltimore. An estimated 81,500 Jewish Americans live in Montgomery County, constituting approximately 10% of the total population. The Seventh-day Adventist Church's world headquarters and the Ahmadiyya Muslims' national headquarters are located in Silver Spring, just outside the District of Columbia. Economy The Bureau of Economic Analysis estimates that Maryland's gross state product in 2016 was $382.4 billion. However, Maryland has been using the Genuine Progress Indicator, an indicator of well-being, to guide the state's development, rather than relying only on growth indicators like GDP. According to the U.S.
Census Bureau, Maryland households are currently the wealthiest in the country, with a 2013 median household income of $72,483, which puts the state ahead of New Jersey and Connecticut, which rank second and third, respectively. Two of Maryland's counties, Howard and Montgomery, are the second- and eleventh-wealthiest counties in the nation, respectively. In 2013 Maryland had the most millionaires per capita, with a ratio of 7.7 percent. Also, the state's poverty rate of 7.8 percent is the lowest in the country. Per capita personal income in 2006 was $43,500, fifth in the nation. As of February 2018, the state's unemployment rate was 4.2 percent. Maryland's economy benefits from the state's proximity to the federal government in Washington, D.C., with an emphasis on technical and administrative tasks for the defense/aerospace industry and bio-research laboratories, as well as staffing of satellite government headquarters in the suburban or exurban Baltimore/Washington area. Ft. Meade serves as the headquarters of the Defense Information Systems Agency, United States Cyber Command, and the National Security Agency/Central Security Service. In addition, a number of educational and medical research institutions are located in the state. In fact, the various components of The Johns Hopkins University and its medical research facilities are now the largest single employer in the Baltimore area. Altogether, white-collar technical and administrative workers comprise 25 percent of Maryland's labor force, attributable in part to Maryland's location within the Washington metropolitan area, where federal government employment is relatively high. Manufacturing, while large in dollar value, is highly diversified, with no sub-sector contributing over 20 percent of the total. Typical forms of manufacturing include electronics, computer equipment, and chemicals.
The primary metals sub-sector, which once included what was then the largest steel factory in the world at Sparrows Point, still exists, but it has been pressed by foreign competition, bankruptcies, and mergers. During World War II the Glenn Martin Company (now part of Lockheed Martin) airplane factory employed some 40,000 people. Mining other than construction materials is virtually limited to coal, which is located in the mountainous western part of the state. The brownstone quarries in the east, which gave Baltimore and Washington much of their characteristic architecture in the mid-19th century, were once a predominant natural resource. Historically, there were small gold-mining operations in Maryland, some near Washington, but these no longer exist. Baltimore port One major service activity is transportation, centered on the Port of Baltimore and its related rail and trucking access. The port ranked 17th in the U.S. by tonnage in 2008. Although the port handles a wide variety of products, the most typical imports are raw materials and bulk commodities, such as iron ore, petroleum, sugar, and fertilizers, often distributed to the relatively close manufacturing centers of the inland Midwest via good overland transportation. The port also receives several brands of imported motor vehicles and is the number-one auto port in the U.S. Baltimore City is among the top 15 largest ports in the nation, and was one of six major U.S. ports that were part of the February 2006 controversy over the Dubai Ports World deal. The state as a whole is heavily industrialized, with a booming economy and influential technology centers. Its computer industries are some of the most sophisticated in the United States, and the federal government has invested heavily in the area. Maryland is home to several large military bases and scores of high-level government jobs.
The Chesapeake and Delaware Canal is a canal on the Eastern Shore that connects the waters of the Delaware River with those of the Chesapeake Bay, and in particular with the Port of Baltimore, carrying 40 percent of the port's ship traffic. Agriculture and fishing Maryland has a large food-production sector. A large component of this is commercial fishing, centered in the Chesapeake Bay, but also including activity off the short Atlantic seacoast. The largest catches by species are the blue crab, oysters, striped bass, and menhaden. The Bay also has overwintering waterfowl in its wildlife refuges.
in the United States. The Grand Rapids metropolitan area in Western Michigan is the state's fastest-growing metro area, with more than 1.3 million residents. Metro Detroit receives more than 15 million visitors each year. Michigan has many popular tourist destinations, including areas such as Frankenmuth in The Thumb and Traverse City on the Grand Traverse Bay in Northern Michigan. Tourists spend about $17 billion annually in Michigan, supporting 193,000 jobs. Michigan typically ranks third or fourth in overall research and development (R&D) expenditures in the US. The state's leading research institutions include the University of Michigan, Michigan State University, and Wayne State University, which are important partners in the state's economy and in the state's University Research Corridor. Michigan's public universities attract more than $1.5 billion in research and development grants each year. Agriculture also plays a significant role, making the state a leading grower of fruit in the US, including blueberries, cherries, apples, grapes, and peaches. Government State government Michigan is governed as a republic, with three branches of government: the executive branch, consisting of the Governor of Michigan and the other independently elected constitutional officers; the legislative branch, consisting of the House of Representatives and Senate; and the judicial branch. The Michigan Constitution allows for the direct participation of the electorate by statutory initiative and referendum, recall, and constitutional initiative and referral (Article II, § 9, defined as "the power to propose laws and to enact and reject laws, called the initiative, and the power to approve or reject laws enacted by the legislature, called the referendum. The power of initiative extends only to laws which the legislature may enact under this constitution"). Lansing is the state capital and is home to all three branches of state government.
The governor and the other state constitutional officers serve four-year terms and may be re-elected only once. The current governor is Gretchen Whitmer. Michigan has two official Governor's Residences; one is in Lansing, and the other is at Mackinac Island. The other constitutionally elected executive officers are the lieutenant governor, who is elected on a joint ticket with the governor, the secretary of state, and the attorney general. The lieutenant governor presides over the Senate (voting only in case of a tie) and is also a member of the cabinet. The secretary of state is the chief elections officer and is charged with running many licensure programs, including motor vehicles, all of which are done through the branch offices of the secretary of state. The Michigan Legislature consists of a 38-member Senate and a 110-member House of Representatives. Members of both houses of the legislature are elected through first-past-the-post elections by single-member electoral districts of near-equal population that often have boundaries which coincide with county and municipal lines. Senators serve four-year terms concurrent with those of the governor, while representatives serve two-year terms. The Michigan State Capitol was dedicated in 1879 and has hosted the executive and legislative branches of the state ever since. The Michigan judiciary consists of two courts with primary jurisdiction (the Circuit Courts and the District Courts), one intermediate-level appellate court (the Michigan Court of Appeals), and the Michigan Supreme Court. There are several administrative courts and specialized courts. District courts are trial courts of limited jurisdiction, handling most traffic violations, small claims, misdemeanors, and civil suits where the amount contended is below $25,000. District courts are often responsible for handling the preliminary examination and for setting bail in felony cases. District court judges are elected to terms of six years.
In a few locations, municipal courts have been retained to the exclusion of the establishment of district courts. There are 57 circuit courts in the State of Michigan, which have original jurisdiction over all civil suits where the amount contended in the case exceeds $25,000 and all criminal cases involving felonies. Circuit courts are also the only trial courts in the State of Michigan which possess the power to issue equitable remedies. Circuit courts have appellate jurisdiction from district and municipal courts, as well as from decisions and decrees of state agencies. Most counties have their own circuit court, but sparsely populated counties often share them. Circuit court judges are elected to terms of six years. State appellate court judges are elected to terms of six years, but vacancies are filled by an appointment by the governor. There are four divisions of the Court of Appeals, in Detroit, Grand Rapids, Lansing, and Marquette. Cases are heard by the Court of Appeals by panels of three judges, who examine the application of the law and not the facts of the case unless there has been grievous error pertaining to questions of fact. The Michigan Supreme Court consists of seven members who are elected on non-partisan ballots for staggered eight-year terms. The Supreme Court has original jurisdiction only in narrow circumstances but holds appellate jurisdiction over the entire state judicial system. Law Michigan has had four constitutions, the first of which was ratified on October 5 and 6, 1835. There were also constitutions from 1850 and 1908, in addition to the current constitution from 1963. The current document has a preamble, 11 articles, and one section consisting of a schedule and temporary provisions. Michigan, like every U.S. state except Louisiana, has a common law legal system. Politics A Democratic-leaning state at the presidential level since the 1990s, Michigan evolved into a swing state after Donald Trump won it in 2016.
Governors since the 1970s have alternated between the Democrats and Republicans, and statewide offices including attorney general, secretary of state, and senator have been held by members of both parties in varying proportion. The Republican Party holds a majority in both the House and Senate of the Michigan Legislature. The state's congressional delegation is commonly split, with one party or the other typically holding a narrow majority. Michigan was the home of Gerald Ford, the 38th president of the United States. Born in Nebraska, he moved as an infant to Grand Rapids. The Gerald R. Ford Museum is in Grand Rapids, and the Gerald R. Ford Presidential Library is on the campus of his alma mater, the University of Michigan, in Ann Arbor. In a 2020 study, Michigan was ranked as the 13th easiest state for citizens to vote in. Administrative divisions State government is decentralized among three tiers—statewide, county, and township. Counties are administrative divisions of the state, and townships are administrative divisions of a county. Both of them exercise state government authority, localized to meet the particular needs of their jurisdictions, as provided by state law. There are 83 counties in Michigan. Cities, state universities, and villages are vested with home rule powers of varying degrees. Home rule cities can generally do anything not prohibited by law. The fifteen state universities have broad power and can do anything within the parameters of their status as educational institutions that is not prohibited by the state constitution. Villages, by contrast, have limited home rule and are not completely autonomous from the county and township in which they are located. There are two types of township in Michigan: general law and charter. Charter township status was created by the Legislature in 1947 and grants additional powers and streamlined administration in order to provide greater protection against annexation by a city.
There were 127 charter townships in Michigan. In general, charter townships have many of the same powers as a city but without the same level of obligations. For example, a charter township can have its own fire department, water and sewer department, police department, and so on—just like a city—but it is not required to have those things, whereas cities must provide those services. Charter townships can opt to use county-wide services instead, such as deputies from the county sheriff's office instead of a home-based force of ordinance officers. Geography Michigan consists of two peninsulas separated by the Straits of Mackinac. The 45th parallel north runs through the state, marked by highway signs and the Polar-Equator Trail—along a line including Mission Point Light near Traverse City, the towns of Gaylord and Alpena in the Lower Peninsula, and Menominee in the Upper Peninsula. With the exception of two tiny areas drained by the Mississippi River by way of the Wisconsin River in the Upper Peninsula and by way of the Kankakee-Illinois River in the Lower Peninsula, Michigan is drained by the Great Lakes-St. Lawrence watershed and is the only state with the majority of its land thus drained. No point in the state is more than from a natural water source or more than from a Great Lakes shoreline. The Great Lakes that border Michigan from east to west are Lake Erie, Lake Huron, Lake Michigan, and Lake Superior. The state is bounded on the south by the states of Ohio and Indiana, sharing land and water boundaries with both. Michigan's western boundaries are almost entirely water boundaries, from south to north, with Illinois and Wisconsin in Lake Michigan; then a land boundary with Wisconsin and the Upper Peninsula, principally demarcated by the Menominee and Montreal Rivers; then water boundaries again, in Lake Superior, with Wisconsin and Minnesota to the west, capped around by the Canadian province of Ontario to the north and east.
The heavily forested Upper Peninsula is relatively mountainous in the west. The Porcupine Mountains, which are part of one of the oldest mountain chains in the world, rise to an altitude of almost above sea level and form the watershed between the streams flowing into Lake Superior and Lake Michigan. The surface on either side of this range is rugged. The state's highest point, in the Huron Mountains northwest of Marquette, is Mount Arvon at . The peninsula is as large as Connecticut, Delaware, Massachusetts, and Rhode Island combined but has fewer than 330,000 inhabitants. They are sometimes called "Yoopers" (from "U.P.'ers"), and their speech (the "Yooper dialect") has been heavily influenced by the numerous Scandinavian and Canadian immigrants who settled the area during the lumbering and mining boom of the late 19th century. The Lower Peninsula is shaped like a mitten and many residents hold up a hand to depict where they are from. It is long from north to south and from east to west and occupies nearly two-thirds of the state's land area. The surface of the peninsula is generally level, broken by conical hills and glacial moraines usually not more than a few hundred feet tall. It is divided by a low water divide running north and south. The larger portion of the state is on the west of this and gradually slopes toward Lake Michigan. The highest point in the Lower Peninsula is either Briar Hill at , or one of several points nearby in the vicinity of Cadillac. The lowest point is the surface of Lake Erie at . The geographic orientation of Michigan's peninsulas makes for a long distance between the ends of the state. Ironwood, in the far western Upper Peninsula, lies by highway from Lambertville in the Lower Peninsula's southeastern corner. The geographic isolation of the Upper Peninsula from Michigan's political and population centers makes the region culturally and economically distinct. 
Frequent attempts to establish the Upper Peninsula as its own state, called "Superior", have failed to gain traction. A feature of Michigan that gives it the distinct shape of a mitten is the Thumb. This peninsula projects out into Lake Huron and the Saginaw Bay. The geography of the Thumb is mainly flat with a few rolling hills. Other peninsulas of Michigan include the Keweenaw Peninsula, making up the Copper Country region of the state. The Leelanau Peninsula lies in the Northern Lower Michigan region. Numerous lakes and marshes mark both peninsulas, and the coast is much indented. Keweenaw Bay, Whitefish Bay, and the Big and Little Bays De Noc are the principal indentations on the Upper Peninsula. The Grand and Little Traverse, Thunder, and Saginaw bays indent the Lower Peninsula. Michigan has the second longest shoreline of any state, including of island shoreline. The state has numerous large islands, the principal ones being the North Manitou and South Manitou, Beaver, and Fox groups in Lake Michigan; Isle Royale and Grand Island in Lake Superior; Marquette, Bois Blanc, and Mackinac islands in Lake Huron; and Neebish, Sugar, and Drummond islands in the St. Marys River. Michigan has about 150 lighthouses, the most of any U.S. state. The first lighthouses in Michigan were built between 1818 and 1822. They were built to project light at night and to serve as landmarks during the day, safely guiding the passenger ships and freighters traveling the Great Lakes. The state's rivers are generally small, short, and shallow, and few are navigable. The principal ones include the Detroit River, St. Marys River, and St. Clair River, which connect the Great Lakes; the Au Sable, Cheboygan, and Saginaw, which flow into Lake Huron; the Ontonagon and Tahquamenon, which flow into Lake Superior; and the St. Joseph, Kalamazoo, Grand, Muskegon, Manistee, and Escanaba, which flow into Lake Michigan.
The state has 11,037 inland lakes—totaling of inland water—in addition to of Great Lakes waters. No point in Michigan is more than from an inland lake or more than from one of the Great Lakes. The state is home to several areas maintained by the National Park Service, including Isle Royale National Park, in Lake Superior, about southeast of Thunder Bay, Ontario. Other national protected areas in the state include Keweenaw National Historical Park, Pictured Rocks National Lakeshore, Sleeping Bear Dunes National Lakeshore, Huron National Forest, Manistee National Forest, Hiawatha National Forest, Ottawa National Forest, and Father Marquette National Memorial. The largest section of the North Country National Scenic Trail passes through Michigan. With 78 state parks, 19 state recreation areas, and six state forests, Michigan has the largest state park and state forest system of any state. Climate Michigan has a continental climate, although there are two distinct regions. The southern and central parts of the Lower Peninsula (south of Saginaw Bay and from the Grand Rapids area southward) have a warmer climate (Köppen climate classification Dfa) with hot summers and cold winters. The northern part of the Lower Peninsula and the entire Upper Peninsula have a more severe climate (Köppen Dfb), with warm but shorter summers and longer, cold to very cold winters. Some parts of the state average high temperatures below freezing from December through February, and into early March in the far northern parts. During the winter through the middle of February, the state is frequently subjected to heavy lake-effect snow. The state averages from of precipitation annually; however, some areas in the northern Lower Peninsula and the Upper Peninsula average almost of snowfall per year. Michigan's highest recorded temperature is at Mio on July 13, 1936, and the coldest recorded temperature is at Vanderbilt on February 9, 1934. The state averages 30 days of thunderstorm activity per year.
These can be severe, especially in the southern part of the state. The state averages 17 tornadoes per year, which are more common in the state's extreme southern section. Portions of the southern border have been almost as vulnerable historically as states farther west and in Tornado Alley. For this reason, many communities in the very southern portions of the state have tornado sirens to warn residents of approaching tornadoes. Farther north, in Central Michigan, Northern Michigan, and the Upper Peninsula, tornadoes are rare. Geology The geological formation of the state is greatly varied, with the Michigan Basin being the most significant formation. Primary boulders are found over the entire surface of the Upper Peninsula (being principally of primitive origin), while Secondary deposits cover the entire Lower Peninsula. The Upper Peninsula exhibits Lower Silurian sandstones, limestones, copper- and iron-bearing rocks, corresponding to the Huronian system of Canada. The central portion of the Lower Peninsula contains coal measures and rocks of the Pennsylvanian period. Devonian and sub-Carboniferous deposits are scattered over the entire state. Michigan rarely experiences earthquakes, and those that it does experience are generally smaller ones that do not cause significant damage.
The state's leading research institutions include the University of Michigan, Michigan State University, and Wayne State University, which are important partners in the state's economy and the state's University Research Corridor. Michigan's public universities attract more than $1.5 billion in research and development grants each year. Agriculture also plays a significant role, making the state a leading grower of fruit in the US, including blueberries, cherries, apples, grapes, and peaches.

Government

State government

Michigan is governed as a republic, with three branches of government: the executive branch consisting of the Governor of Michigan and the other independently elected constitutional officers; the legislative branch consisting of the House of Representatives and Senate; and the judicial branch. The Michigan Constitution allows for the direct participation of the electorate by statutory initiative and referendum, recall, and constitutional initiative and referral (Article II, § 9, defined as "the power to propose laws and to enact and reject laws, called the initiative, and the power to approve or reject laws enacted by the legislature, called the referendum. The power of initiative extends only to laws which the legislature may enact under this constitution"). Lansing is the state capital and is home to all three branches of state government. The governor and the other state constitutional officers serve four-year terms and may be re-elected only once. The current governor is Gretchen Whitmer. Michigan has two official Governor's Residences; one is in Lansing, and the other is at Mackinac Island. The other constitutionally elected executive officers are the lieutenant governor, who is elected on a joint ticket with the governor, the secretary of state, and the attorney general. The lieutenant governor presides over the Senate (voting only in case of a tie) and is also a member of the cabinet.
The secretary of state is the chief elections officer and is charged with running many licensure programs including motor vehicles, all of which are done through the branch offices of the secretary of state. The Michigan Legislature consists of a 38-member Senate and 110-member House of Representatives. Members of both houses of the legislature are elected through first past the post elections by single-member electoral districts of near-equal population that often have boundaries which coincide with county and municipal lines. Senators serve four-year terms concurrent to those of the governor, while representatives serve two-year terms. The Michigan State Capitol was dedicated in 1879 and has hosted the executive and legislative branches of the state ever since. The Michigan judiciary consists of two courts with primary jurisdiction (the Circuit Courts and the District Courts), one intermediate level appellate court (the Michigan Court of Appeals), and the Michigan Supreme Court. There are several administrative courts and specialized courts. District courts are trial courts of limited jurisdiction, handling most traffic violations, small claims, misdemeanors, and civil suits where the amount contended is below $25,000. District courts are often responsible for handling the preliminary examination and for setting bail in felony cases. District court judges are elected to terms of six years. In a few locations, municipal courts have been retained to the exclusion of the establishment of district courts. There are 57 circuit courts in the State of Michigan, which have original jurisdiction over all civil suits where the amount contended in the case exceeds $25,000 and all criminal cases involving felonies. Circuit courts are also the only trial courts in the State of Michigan which possess the power to issue equitable remedies. Circuit courts have appellate jurisdiction from district and municipal courts, as well as from decisions and decrees of state agencies. 
Most counties have their own circuit court, but sparsely populated counties often share them. Circuit court judges are elected to terms of six years. State appellate court judges are elected to terms of six years, but vacancies are filled by an appointment by the governor. There are four divisions of the Court of Appeals, in Detroit, Grand Rapids, Lansing, and Marquette. Cases are heard by the Court of Appeals in panels of three judges, who examine the application of the law and not the facts of the case unless there has been grievous error pertaining to questions of fact. The Michigan Supreme Court consists of seven members who are elected on non-partisan ballots for staggered eight-year terms. The Supreme Court has original jurisdiction only in narrow circumstances but holds appellate jurisdiction over the entire state judicial system.

Law

Michigan has had four constitutions, the first of which was ratified on October 5 and 6, 1835. There were also constitutions from 1850 and 1908, in addition to the current constitution from 1963. The current document has a preamble, 11 articles, and one section consisting of a schedule and temporary provisions. Michigan, like every U.S. state except Louisiana, has a common law legal system.

Politics

A Democratic-leaning state at the presidential level since the 1990s, Michigan became a swing state after Donald Trump won it in 2016. Governors since the 1970s have alternated between the Democrats and Republicans, and statewide offices including attorney general, secretary of state, and senator have been held by members of both parties in varying proportion. The Republican Party holds a majority in both the House and Senate of the Michigan Legislature. The state's congressional delegation is commonly split, with one party or the other typically holding a narrow majority. Michigan was the home of Gerald Ford, the 38th president of the United States. Born in Nebraska, he moved as an infant to Grand Rapids.
The Gerald R. Ford Museum is in Grand Rapids, and the Gerald R. Ford Presidential Library is on the campus of his alma mater, the University of Michigan in Ann Arbor. In a 2020 study, Michigan was ranked as the 13th easiest state for citizens to vote in.

Administrative divisions

State government is decentralized among three tiers—statewide, county and township. Counties are administrative divisions of the state, and townships are administrative divisions of a county. Both of them exercise state government authority, localized to meet the particular needs of their jurisdictions, as provided by state law. There are 83 counties in Michigan. Cities, state universities, and villages are vested with home rule powers of varying degrees. Home rule cities can generally do anything not prohibited by law. The fifteen state universities have broad power and can do anything within the parameters of their status as educational institutions that is not prohibited by the state constitution. Villages, by contrast, have limited home rule and are not completely autonomous from the county and township in which they are located. There are two types of township in Michigan: general law township and charter. Charter township status was created by the Legislature in 1947 and grants additional powers and streamlined administration in order to provide greater protection against annexation by a city. There are 127 charter townships in Michigan. In general, charter townships have many of the same powers as a city but without the same level of obligations. For example, a charter township can have its own fire department, water and sewer department, police department, and so on—just like a city—but it is not required to have those things, whereas cities must provide those services. Charter townships can opt to use county-wide services instead, such as deputies from the county sheriff's office instead of a home-based force of ordinance officers.
Geography

Michigan consists of two peninsulas separated by the Straits of Mackinac. The 45th parallel north runs through the state, marked by highway signs and the Polar-Equator Trail—along a line including Mission Point Light near Traverse City, the towns of Gaylord and Alpena in the Lower Peninsula and Menominee in the Upper Peninsula. With the exception of two tiny areas drained by the Mississippi River by way of the Wisconsin River in the Upper Peninsula and by way of the Kankakee-Illinois River in the Lower Peninsula, Michigan is drained by the Great Lakes-St. Lawrence watershed and is the only state with the majority of its land thus drained. No point in the state is more than from a natural water source or more than from a Great Lakes shoreline. The Great Lakes that border Michigan from east to west are Lake Erie, Lake Huron, Lake Michigan and Lake Superior. The state is bounded on the south by the states of Ohio and Indiana, sharing land and water boundaries with both. Michigan's western boundaries are almost entirely water boundaries, from south to north, with Illinois and Wisconsin in Lake Michigan; then a land boundary with Wisconsin and the Upper Peninsula, that is principally demarcated by the Menominee and Montreal Rivers; then water boundaries again, in Lake Superior, with Wisconsin and Minnesota to the west, capped around by the Canadian province of Ontario to the north and east. The heavily forested Upper Peninsula is relatively mountainous in the west. The Porcupine Mountains, which are part of one of the oldest mountain chains in the world, rise to an altitude of almost above sea level and form the watershed between the streams flowing into Lake Superior and Lake Michigan. The surface on either side of this range is rugged. The state's highest point, in the Huron Mountains northwest of Marquette, is Mount Arvon at . The peninsula is as large as Connecticut, Delaware, Massachusetts, and Rhode Island combined but has fewer than 330,000 inhabitants.
They are sometimes called "Yoopers" (from "U.P.'ers"), and their speech (the "Yooper dialect") has been heavily influenced by the numerous Scandinavian and Canadian immigrants who settled the area during the lumbering and mining boom of the late 19th century. The Lower Peninsula is shaped like a mitten and many residents hold up a hand to depict where they are from. It is long from north to south and from east to west and occupies nearly two-thirds of the state's land area. The surface of the peninsula is generally level, broken by conical hills and glacial moraines usually not more than a few hundred feet tall. It is divided by a low water divide running north and south. The larger portion of the state is on the west of this and gradually slopes toward Lake Michigan. The highest point in the Lower Peninsula is either Briar Hill at , or one of several points nearby in the vicinity of Cadillac. The lowest point is the surface of Lake Erie at . The geographic orientation of Michigan's peninsulas makes for a long distance between the ends of the state. Ironwood, in the far western Upper Peninsula, lies by highway from Lambertville in the Lower Peninsula's southeastern corner. The geographic isolation of the Upper Peninsula from Michigan's political and population centers makes the region culturally and economically distinct. Frequent attempts to establish the Upper Peninsula as its own state called "Superior" have failed to gain traction. A feature of Michigan that gives it the distinct shape of a mitten is the Thumb. This peninsula projects out into Lake Huron and the Saginaw Bay. The geography of the Thumb is mainly flat with a few rolling hills. Other peninsulas of Michigan include the Keweenaw Peninsula, making up the Copper Country region of the state. The Leelanau Peninsula lies in the Northern Lower Michigan region. Numerous lakes and marshes mark both peninsulas, and the coast is much indented.
Keweenaw Bay, Whitefish Bay, and the Big and Little Bays De Noc are the principal indentations on the Upper Peninsula. The Grand and Little Traverse, Thunder, and Saginaw bays indent the Lower Peninsula. Michigan has the second longest shoreline of any state—, including of island shoreline. The state has numerous large islands, the principal ones being the North Manitou and South Manitou, Beaver, and Fox groups in Lake Michigan; Isle Royale and Grand Island in Lake Superior; Marquette, Bois Blanc, and Mackinac islands in Lake Huron; and Neebish, Sugar, and Drummond islands in the St. Marys River. Michigan has about 150 lighthouses, the most of any U.S. state. The first lighthouses in Michigan were built between 1818 and 1822. They were built to project light at night and to serve as a landmark during the day to safely guide the passenger ships and freighters traveling the Great Lakes. The state's rivers are generally small, short and shallow, and few are navigable. The principal ones include the Detroit River, St. Marys River, and St. Clair River which connect the Great Lakes; the Au Sable, Cheboygan, and Saginaw, which flow into Lake Huron; the Ontonagon, and Tahquamenon, which flow into Lake Superior; and the St. Joseph, Kalamazoo, Grand, Muskegon, Manistee, and Escanaba, which flow into Lake Michigan. The state has 11,037 inland lakes—totaling of inland water—in addition to of Great Lakes waters. No point in Michigan is more than from an inland lake or more than from one of the Great Lakes. The state is home to several areas maintained by the National Park Service including: Isle Royale National Park, in Lake Superior, about southeast of Thunder Bay, Ontario.
Other national protected areas in the state include: Keweenaw National Historical Park, Pictured Rocks National Lakeshore, Sleeping Bear Dunes National Lakeshore, Huron National Forest, Manistee National Forest, Hiawatha National Forest, Ottawa National Forest and Father Marquette National Memorial. The largest section of the North Country National Scenic Trail passes through Michigan. With 78 state parks, 19 state recreation areas, and six state forests, Michigan has the largest state park and state forest system of any state.

Climate

Michigan has a continental climate, although there are two distinct regions. The southern and central parts of the Lower Peninsula (south of Saginaw Bay and from the Grand Rapids area southward) have a warmer climate (Köppen climate classification Dfa) with hot summers and cold winters. The northern part of the Lower Peninsula and the entire Upper Peninsula have a more severe climate (Köppen Dfb), with warm but shorter summers and longer, cold to very cold winters. Some parts of the state average high temperatures below freezing from December through February, and into early March in the far northern parts. During the winter through the middle of February, the state is frequently subjected to heavy lake-effect snow. The state averages from of precipitation annually; however, some areas in the northern lower peninsula and the upper peninsula average almost of snowfall per year. Michigan's highest recorded temperature is at Mio on July 13, 1936, and the coldest recorded temperature is at Vanderbilt on February 9, 1934. The state averages 30 days of thunderstorm activity per year. These can be severe, especially in the southern part of the state. The state averages 17 tornadoes per year, which are more common in the state's extreme southern section. Portions of the southern border have been almost as vulnerable historically as states further west and in Tornado Alley.
For this reason, many communities in the very southern portions of the state have tornado sirens to warn residents of approaching tornadoes. Farther north, in Central Michigan, Northern Michigan, and the Upper Peninsula, tornadoes are rare.

Geology

The geological formation of the state is greatly varied, with the Michigan Basin being the most significant formation. Primary boulders are found over the entire surface of the Upper Peninsula (being principally of primitive origin), while Secondary deposits cover the entire Lower Peninsula. The Upper Peninsula exhibits Lower Silurian sandstones, limestones, copper and iron bearing rocks, corresponding to the Huronian system of Canada. The central portion of the Lower Peninsula contains coal measures and rocks of the Pennsylvanian period. Devonian and sub-Carboniferous deposits are scattered over the entire state. Michigan rarely experiences earthquakes, and those that it does experience are generally smaller ones that do not cause significant damage. A 4.6-magnitude earthquake struck in August 1947. More recently, a 4.2-magnitude earthquake occurred on Saturday, May 2, 2015, shortly after noon, about five miles south of Galesburg, Michigan (9 miles southeast of Kalamazoo) in central Michigan, about 140 miles west of Detroit, according to the Colorado-based U.S. Geological Survey's National Earthquake Information Center. No major damage or injuries were reported, according to Governor Rick Snyder's office.

Demographics

Population

The United States Census Bureau recorded the population of Michigan at 10,084,442 at the 2020 United States Census, an increase of 2.03% from the 9,883,635 recorded at the 2010 United States Census.
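The census comparison above is simple percentage growth; a quick check of the quoted 2.03% figure:

```python
# Percent growth between the 2010 and 2020 census counts quoted above.
pop_2020 = 10_084_442
pop_2010 = 9_883_635

growth = (pop_2020 - pop_2010) / pop_2010 * 100
print(f"{growth:.2f}%")  # → 2.03%
```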
their original findings, across additional data sets. A 2010 study by three economists (Arindrajit Dube of the University of Massachusetts Amherst, William Lester of the University of North Carolina at Chapel Hill, and Michael Reich of the University of California, Berkeley) compared adjacent counties in different states where the minimum wage had been raised in one of the states. They analyzed employment trends for several categories of low-wage workers from 1990 to 2006 and found that increases in minimum wages had no negative effects on low-wage employment and successfully increased the income of workers in food services and retail employment, as well as the narrower category of workers in restaurants. However, a 2011 study by Baskaya and Rubinstein of Brown University found that at the federal level, "a rise in minimum wage have [sic] an instantaneous impact on wage rates and a corresponding negative impact on employment", stating, "Minimum wage increases boost teenage wage rates and reduce teenage employment." Another 2011 study by Sen, Rybczynski, and Van De Waal found that "a 10% increase in the minimum wage is significantly correlated with a 3–5% drop in teen employment." A 2012 study by Sabia, Hansen, and Burkhauser found that "minimum wage increases can have substantial adverse labor demand effects for low-skilled individuals", with the largest effects on those aged 16 to 24. A 2013 study by Meer and West concluded that "the minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments ... most pronounced for younger workers and in industries with a higher proportion of low-wage workers." This study by Meer and West was later critiqued for its assumptions about trends in the context of narrowly defined low-wage groups.
The authors replied to the critiques and released additional data which addressed the criticism of their methodology, but did not resolve the issue of whether their data showed a causal relationship. A 2019 paper published in the Quarterly Journal of Economics by Cengiz, Dube, Lindner and Zipperer argues that the job losses found using a Meer and West type methodology "tend to be driven by an unrealistically large drop in the number of jobs at the upper tail of the wage distribution, which is unlikely to be a causal effect of the minimum wage." Another 2013 study, by Suzana Laporšek of the University of Primorska, on youth unemployment in Europe claimed there was "a negative, statistically significant impact of minimum wage on youth employment." A 2013 study by labor economists Tony Fang and Carl Lin which studied minimum wages and employment in China found that "minimum wage changes have significant adverse effects on employment in the Eastern and Central regions of China, and result in disemployment for females, young adults, and low-skilled workers". A 2017 study found that in Seattle, increasing the minimum wage to $13 per hour lowered the income of low-wage workers by $125 per month, due to the resulting reduction in hours worked, as industries made changes to make their businesses less labor intensive. The authors argue that previous research that found no negative effects on hours worked is flawed because it only looked at select industries, or only at teenagers, instead of entire economies. Finally, a study by Overstreet in 2019 examined increases to the minimum wage in Arizona. Utilizing data spanning from 1976 to 2017, Overstreet found that a 1% increase in the minimum wage was significantly correlated with a 1.13% increase in per capita income in Arizona. This study suggests that smaller increases in the minimum wage may not distort the labor market as significantly as the larger increases experienced in other cities and states.
Thus, the small increases experienced in Arizona may have actually led to a slight increase in economic growth. In 2019, economists from Georgia Tech published a study that found a strong correlation between increases to the minimum wage and detectable harm to the financial conditions of small businesses, including a higher rate of bankruptcy, lower hiring rates, lower credit scores, and higher interest payments. The researchers noted that these small businesses were also correlated with minority ownership and minority customer bases. In July 2019, the Congressional Budget Office published a report on the impact of proposed national $15/hour legislation. It noted that workers who retained full employment would see a modest improvement in take-home pay, offset by a small decrease in working conditions and non-pecuniary benefits. However, this benefit is offset by three primary factors: the reduction in hours worked, the reduction in total employment, and the increased cost of goods and services. Those factors result in a decrease of about $33 billion in total income and nearly 1.7–3.7 million lost jobs in the first three years (the CBO also noted this figure increases over time). In response to an April 2016 Council of Economic Advisers (CEA) report advocating the raising of the minimum wage to deter crime, economists used data from the 1998–2016 Uniform Crime Reports (UCR), National Incident-Based Reporting System (NIBRS), and National Longitudinal Study of Youth (NLSY) to assess the impact of the minimum wage on crime. They found that increasing the minimum wage resulted in increased property crime arrests among those ages 16 to 24. They estimated that an increase of the federal minimum wage to $15/hour would "generate criminal externality costs of nearly $2.4 billion." Economists in Denmark, relying on a discontinuity in wage rates when a worker turns 18, found that employment fell by 33% and total hours fell by 45% when the minimum wage law was in effect.
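The Danish figures also imply an effect on hours per remaining worker; a back-of-the-envelope calculation (an inference from the two quoted percentages, not a number reported by the study):

```python
# If employment falls 33% and total hours fall 45% when the minimum wage
# applies, average hours per remaining worker must fall as well.
employment_drop = 0.33   # reported drop in employment at age 18
total_hours_drop = 0.45  # reported drop in total hours at age 18

hours_per_worker_change = (1 - total_hours_drop) / (1 - employment_drop) - 1
print(f"{hours_per_worker_change:.1%}")  # roughly -17.9%
```

That is, the workers who remain employed also work noticeably shorter hours, so the total-hours effect is not driven by job losses alone.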
According to the 2021 study "The Effects of Minimum Wage on Employment: New Evidences for Spain" by the Bank of Spain, the sudden 22% increase of the minimum wage in Spain in 2019 (from 860 EUR/month to 1,050 EUR/month, prorated over 12 annual payments) destroyed between 98,000 and 180,000 jobs, which corresponds to between 6% and 11% of jobs remunerated at the minimum wage. A 2021 study, "Reallocation Effects of the Minimum Wage", in the Quarterly Journal of Economics found that the introduction of a nationwide minimum wage in Germany (8.50 EUR/hour) caused an increase in wages without leading to a reduction in employment. However, the authors found that the lack of employment responses masks some important structural shifts in the economy: the minimum wage led to a reallocation of workers from smaller to larger, from lower-paying to higher-paying, and from less- to more-productive establishments. Some small businesses had to exit the market, thus increasing market concentration and reducing competition among firms in the product market, which can lead to higher prices. The study also found that the reallocation of low-wage workers to higher-paying establishments came at the expense of increased commuting time, which might have left some workers worse off despite earning a higher wage.

Statistical meta-analyses

Several researchers have conducted statistical meta-analyses of the employment effects of the minimum wage. In 1995, Card and Krueger analyzed 14 earlier time-series studies on minimum wages and concluded that there was clear evidence of publication bias (in favor of studies that found a statistically significant negative employment effect). They point out that later studies, which had more data and lower standard errors, did not show the expected increase in t-statistic (almost all the studies had a t-statistic of about two, just above the level of statistical significance at the .05 level).
Though this was a serious methodological indictment, opponents of the minimum wage largely ignored the issue; as Thomas Leonard noted, "The silence is fairly deafening." In 2005, T.D. Stanley showed that Card and Krueger's results could signify either publication bias or the absence of a minimum wage effect. However, using a different methodology, Stanley concluded that there is evidence of publication bias and that correction of this bias shows no relationship between the minimum wage and unemployment. In 2008, Hristos Doucouliagos and T.D. Stanley conducted a similar meta-analysis of 64 U.S. studies on disemployment effects and concluded that Card and Krueger's initial claim of publication bias is still correct. Moreover, they concluded, "Once this publication selection is corrected, little or no evidence of a negative association between minimum wages and employment remains." In 2013, a meta-analysis of 16 UK studies found no significant effects on employment attributable to the minimum wage. A 2007 meta-analysis by David Neumark of 96 studies found a consistent, but not always statistically significant, negative effect on employment from increases in the minimum wage.

Debate over consequences

Minimum wage laws affect workers in most low-paid fields of employment and have usually been judged against the criterion of reducing poverty. Minimum wage laws receive less support from economists than from the general public. Despite decades of experience and economic research, debates about the costs and benefits of minimum wages continue today. Various groups have great ideological, political, financial, and emotional investments in issues surrounding minimum wage laws. For example, agencies that administer the laws have a vested interest in showing that "their" laws do not create unemployment, as do labor unions whose members' finances are protected by minimum wage laws.
On the other side of the issue, low-wage employers such as restaurants finance the Employment Policies Institute, which has released numerous studies opposing the minimum wage. The presence of these powerful groups and factors means that the debate on the issue is not always based on dispassionate analysis. Additionally, it is extraordinarily difficult to separate the effects of the minimum wage from all the other variables that affect employment. Studies have found that minimum wages have the following positive effects:

- Improves functioning of the low-wage labor market, which may be characterized by employer-side market power (monopsony).
- Raises family incomes at the bottom of the income distribution and lowers poverty.
- Positive impact on small business owners and industry.
- Encourages education, resulting in better-paying jobs.
- Increases incentives to take jobs, as opposed to other methods of transferring income to the poor that are not tied to employment (such as food subsidies for the poor or welfare payments for the unemployed).
- Increased job growth and creation.
- Encourages efficiency and automation of industry.
- Removes low-paying jobs, forcing workers to train for, and move to, higher-paying jobs.
- Increases technological development, since costly technology that increases business efficiency is more appealing as the price of labor increases.
- Encourages people to join the workforce rather than pursuing money through illegal means, e.g., selling illegal drugs.

Other studies have found the following negative effects:

- Minimum wage alone is not effective at alleviating poverty, and in fact produces a net increase in poverty due to disemployment effects.
- As a labor market analogue of political-economic protectionism, it excludes low-cost competitors from labor markets and hampers firms in reducing wage costs during trade downturns, which generates various industrial-economic inefficiencies.
- Reduces the quantity demanded of workers, either through a reduction in the number of hours worked by individuals or through a reduction in the number of jobs.
- Wage/price spiral.
- Encourages employers to replace low-skilled workers with computers, such as self-checkout machines.
- Increases property crime and misery in poor communities by decreasing legal markets of production and consumption in those communities.
- Can result in the exclusion of certain groups (ethnic, gender, etc.) from the labor force.
- Is less effective than other methods (e.g., the Earned Income Tax Credit) at reducing poverty, and is more damaging to businesses than those other methods.
- Discourages further education among the poor by enticing people to enter the job market.
- Discriminates against, through pricing out, less qualified workers (including newcomers to the labor market, e.g. young workers) by keeping them from accumulating work experience and qualifications, hence potentially graduating to higher wages later.
- Slows growth in the creation of low-skilled jobs.
- Results in jobs moving to other areas or countries which allow lower-cost labor.
- Results in higher long-term unemployment.
- Results in higher prices for consumers, where products and services are produced by minimum-wage workers (though non-labor costs represent a greater proportion of costs to consumers in industries like fast food and discount retail).

A widely circulated argument that the minimum wage was ineffective at reducing poverty was provided by George Stigler in 1949:

- Employment may fall more than in proportion to the wage increase, thereby reducing overall earnings;
- As uncovered sectors of the economy absorb workers released from the covered sectors, the decrease in wages in the uncovered sectors may exceed the increase in wages in the covered ones;
- The impact of the minimum wage on family income distribution may be negative unless the fewer but better jobs are allocated to members of needy families rather than to, for example, teenagers from families not in poverty;
- Forbidding employers to pay less than a legal minimum is equivalent to forbidding workers to sell their labor for less than the minimum wage. The legal restriction that employers cannot pay less than a legislated wage is equivalent to the legal restriction that workers cannot work at all in the protected sector unless they can find employers willing to hire them at that wage. That may be seen as a legal violation of the human right to work in its most basic interpretation as "a right to engage in productive employment, and not to be prevented from doing so".

In 2006, the International Labour Organization (ILO) argued that the minimum wage could not be directly linked to unemployment in countries that have suffered job losses. In April 2010, the Organisation for Economic Co-operation and Development (OECD) released a report arguing that countries could alleviate teen unemployment by "lowering the cost of employing low-skilled youth" through a sub-minimum training wage. A study of U.S.
states showed that businesses' annual and average payrolls grew faster and employment grew at a faster rate in states with a minimum wage. The study showed a correlation, but did not claim to prove causation. Although the minimum wage was strongly opposed by both the business community and the Conservative Party when it was introduced in the UK in 1999, the Conservatives reversed their opposition in 2000. Accounts differ as to the effects of the minimum wage. The Centre for Economic Performance found no discernible impact on employment levels from the wage increases, while the Low Pay Commission found that employers had reduced their rate of hiring and employee hours, and had found ways to make current workers more productive (especially service companies). The Institute for the Study of Labor found that prices in the minimum wage sector rose significantly faster than prices in non-minimum wage sectors in the four years following the implementation of the minimum wage. Neither trade unions nor employer organizations contest the minimum wage, although the latter had done so heavily until 1999. In 2014, supporters of the minimum wage cited a study that found that job creation within the United States is faster in states that raised their minimum wages. In 2014, supporters of the minimum wage also cited news reports that the state with the highest minimum wage saw more job creation than the rest of the United States. In 2014, in Seattle, Washington, liberal and progressive business owners who had supported the city's new $15 minimum wage said they might hold off on expanding their businesses and thus creating new jobs, due to the uncertain timescale of the wage increase implementation. However, subsequently at least two of the business owners quoted did expand.
With regard to the economic effects of introducing minimum wage legislation in Germany in January 2015, recent developments have shown that the feared increase in unemployment did not materialize; however, in some economic sectors and regions of the country there was a decline in job opportunities, particularly for temporary and part-time workers, and some low-wage jobs disappeared entirely. Because of this overall positive development, the Deutsche Bundesbank revised its opinion, and ascertained that "the impact of the introduction of the minimum wage on the total volume of work appears to be very limited in the present business cycle". A 2019 study published in the American Journal of Preventive Medicine showed that in the United States, those states which implemented a higher minimum wage saw a decline in the growth of suicide rates. The researchers say that for every one dollar increase, the annual suicide growth rate fell by 1.9%. The study covers all 50 states for the years 2006 to 2016. According to a 2020 US study, the cost of 10% minimum wage increases for grocery store workers was fully passed through to consumers as 0.4% higher grocery prices. Similarly, a 2021 study covering 10,000 McDonald's restaurants in the US found that between 2016 and 2020, the cost of 10% minimum wage increases for McDonald's workers was passed through to customers as 1.4% increases in the price of a Big Mac. This means that minimum wage workers receive a smaller increase in their real wage than in their nominal wage, because any goods and services they purchase that are made with minimum-wage labor have now increased in cost, analogous to an increase in the sales tax. According to a 2019 review of the academic literature by Arindrajit Dube, "overall, the most up to date body of research from US, UK and other developed countries points to a very muted effect of minimum wages on employment, while significantly increasing the earnings of low paid workers."
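The real-versus-nominal point above is simple arithmetic. As an illustration, the sketch below combines the pass-through figures cited above (a 10% nominal raise passed through as a 1.4% price increase) with a hypothetical budget share; the budget share and the function name are assumptions for illustration, not figures from the studies.

```python
# Hedged illustration: how much of a nominal minimum-wage raise survives
# as a real (purchasing-power) gain once pass-through raises the prices
# of some of the goods the worker buys.

def real_wage_change(nominal_raise, price_increase, affected_budget_share):
    """Approximate real wage change.

    nominal_raise: e.g. 0.10 for a 10% raise
    price_increase: price rise on affected goods, e.g. 0.014 (1.4%)
    affected_budget_share: fraction of spending on those goods (assumed)
    """
    effective_inflation = price_increase * affected_budget_share
    return (1 + nominal_raise) / (1 + effective_inflation) - 1

# With a hypothetical 30% budget share, nearly all of the raise survives:
print(round(real_wage_change(0.10, 0.014, 0.30), 4))  # ~0.0954, i.e. a ~9.5% real gain
```

Even with full pass-through on every purchase (share = 1.0), the real gain only falls to roughly 8.5%, which is why the text likens the effect to a small sales-tax increase rather than a wipe-out of the raise.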
According to a 2021 study, "The Minimum Wage, EITC, and Criminal Recidivism", a minimum wage increase of $0.50 reduces the probability that an ex-incarcerated individual returns to prison within 3 years by 2.15%; these reductions come mainly from reduced recidivism for property and drug crimes. Surveys of economists There used to be agreement among economists that the minimum wage adversely affected employment, but that consensus shifted in the early 1990s due to new research findings. According to one 2021 assessment, "there is no consensus on the employment effects of the minimum wage." According to a 1978 article in the American Economic Review, 90% of the economists surveyed agreed that the minimum wage increases unemployment among low-skilled workers. By 1992 the survey found 79% of economists in agreement with that statement, and by 2000, 46% were in full agreement with the statement and 28% agreed with provisos (74% total). The authors of the 2000 study also reweighted data from a 1990 sample to show that at that time 62% of academic economists agreed with the statement above, while 20% agreed with provisos and 18% disagreed. They state that the reduction in consensus on this question is "likely" due to the Card and Krueger research and subsequent debate. A similar survey in 2006 by Robert Whaples polled PhD members of the American Economic Association (AEA). Whaples found that 47% of respondents wanted the minimum wage eliminated, 38% supported an increase, 14% wanted it kept at the current level, and 1% wanted it decreased. Another survey in 2007, conducted by the University of New Hampshire Survey Center, found that 73% of labor economists surveyed in the United States believed that a minimum wage set at 150% of the then-current level would result in employment losses, and 68% believed such a minimum wage would cause an increase in hiring of workers with greater skills. 31% felt that no hiring changes would result. Surveys of labor economists have found a sharp split on the minimum wage.
Fuchs et al. (1998) polled labor economists at the top 40 research universities in the United States on a variety of questions in the summer of 1996. Their 65 respondents were nearly evenly divided when asked if the minimum wage should be increased. They argued that the different policy views were not related to views on whether raising the minimum wage would reduce teen employment (the median economist said there would be a reduction of 1%), but on value differences such as income redistribution. Daniel B. Klein and Stewart Dompe conclude, on the basis of previous surveys, "the average level of support for the minimum wage is somewhat higher among labor economists than among AEA members." In 2007, Klein and Dompe conducted a non-anonymous survey of supporters of the minimum wage who had signed the "Raise the Minimum Wage" statement published by the Economic Policy Institute. 95 of the 605 signatories responded. They found that a majority signed on the grounds that it transferred income from employers to workers, or equalized bargaining power between them in the labor market. In addition, a majority considered disemployment to be a moderate potential drawback to the increase they supported. In 2013, a diverse group of 37 economics professors was surveyed on their view of the minimum wage's impact on employment. 34% of respondents agreed with the statement, "Raising the federal minimum wage to $9 per hour would make it noticeably harder for low-skilled workers to find employment." 32% disagreed and the remaining respondents were uncertain or had no opinion on the question. 47% agreed with the statement, "The distortionary costs of raising the federal minimum wage to $9 per hour and indexing it to inflation are sufficiently small compared with the benefits to low-skilled workers who can find employment that this would be a desirable policy", while 11% disagreed. Alternatives Economists and other political commentators have proposed alternatives to the minimum wage. 
They argue that these alternatives may address the issue of poverty better than a minimum wage, as they would benefit a broader population of low wage earners, would not cause any unemployment, and would distribute the costs widely rather than concentrating them on employers of low wage workers. Basic income A basic income (or negative income tax – NIT) is a system of social security that periodically provides each citizen with a sum of money that is sufficient to live on frugally. Supporters of the basic-income idea argue that recipients of the basic income would have considerably more bargaining power when negotiating a wage with an employer, as there would be no risk of destitution for not taking the employment. As a result, jobseekers could spend more time looking for a more appropriate or satisfying job, or they could wait until a higher-paying job appeared. Alternatively, they could spend more time increasing their skills (via education and training), which would make them more suitable for higher-paying jobs, as well as provide numerous other benefits. Experiments on basic income and NIT in Canada and the USA show that people spent more time studying while the program was running. Proponents argue that a basic income that is based on a broad tax base would be more economically efficient than a minimum wage, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency. Guaranteed minimum income A guaranteed minimum income is another proposed system of social welfare provision. It is similar to a basic income or negative income tax system, except that it is normally conditional and subject to a means test. Some proposals also stipulate a willingness to participate in the labor market, or a willingness to perform community services.
Refundable tax credit A refundable tax credit is a mechanism whereby the tax system can reduce the tax owed by a household to below zero, resulting in a net payment to the taxpayer beyond their own payments into the tax system. Examples of refundable tax credits include the earned income tax credit and the additional child tax credit in the US, and working tax credits and child tax credits in the UK. Such a system is slightly different from a negative income tax, in that the refundable tax credit is usually only paid to households that have earned at least some income. This policy is more targeted against poverty than the minimum wage, because it avoids subsidizing low-income workers who are supported by high-income households (for example, teenagers still living with their parents). In the United States, earned income tax credit rates, also known as EITC or EIC, vary by state—some are refundable while other states do not allow a refundable tax credit. The federal EITC program has been expanded by a number of presidents including Jimmy Carter, Ronald Reagan, George H.W. Bush, and Bill Clinton. In 1986, President Reagan described the EITC as "the best anti-poverty, the best pro-family, the best job creation measure to come out of Congress." The ability of the earned income tax credit to deliver larger monetary benefits to poor workers than an increase in the minimum wage, and at a lower cost to society, was documented in a 2007 report by the Congressional Budget Office. The Adam Smith Institute prefers cutting taxes on the poor and middle class instead of raising wages as an alternative to the minimum wage. Collective bargaining Italy, Sweden, Norway, Finland, and Denmark are developed nations where legislation stipulates no minimum wage. Instead, minimum wage standards in different sectors are set by collective bargaining. The Scandinavian countries in particular have very high union participation rates.
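To make the "below zero" mechanics concrete, here is a minimal sketch (all dollar amounts hypothetical) contrasting a refundable credit with a non-refundable one:

```python
# A refundable credit can push the tax bill below zero, producing a net
# payment to the household; a non-refundable credit is floored at zero.

def tax_after_credit(liability, credit, refundable):
    if refundable:
        return liability - credit           # may be negative -> net payment
    return max(0.0, liability - credit)     # cannot go below zero

# Hypothetical household: $500 of tax owed, $2,000 credit.
print(tax_after_credit(500.0, 2000.0, refundable=True))   # -1500.0 (receives $1,500)
print(tax_after_credit(500.0, 2000.0, refundable=False))  # 0.0
```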
Wage subsidies Some economists such as Scott Sumner and Edmund Phelps advocate a wage subsidy program. A wage subsidy is a payment made by a government for work people do. It is paid either on an hourly basis or according to income earned. Advocates argue that the primary deficiencies of the EITC and the minimum wage are best avoided by a wage subsidy. However, the wage subsidy in the United States suffers from a lack of political support from either major political party. Education and training Providing education or funding apprenticeships or technical training can provide a bridge for low skilled workers to move into wages above a minimum wage. For example, Germany has adopted a state funded apprenticeship program that combines on-the-job and classroom training. Having more skills makes workers more valuable and more productive, but a high minimum wage for low-skill jobs reduces the incentive to seek education and training. Moving some workers to higher-paying jobs will decrease the supply of workers willing to accept low-skill jobs, increasing the market wage for those low skilled jobs (assuming a stable labor market). However, in that solution the wage will still not increase above the marginal return for the role and will likely promote automation or business closure. South Korea United States In the United States, federal minimum wage laws had their origin with the Fair Labor Standards Act of 1938, which set the minimum wage at $0.25 per hour. It has been increased multiple times, up to 2020's rate of $7.25 per hour, which was set in 2009. As of 2020, there were 29 states with a minimum wage higher than the federal minimum, as well as more than 40 cities with minimum wages that exceeded state or federal minimum wages. As a result, almost 90% of U.S. minimum wage workers earn more than $7.25, such that the effective nationwide minimum wage (the wage that the average minimum wage worker earns) was $11.80 in May 2019.
The minimum wage is an especially contentious political issue in the United States. The Republican party has generally opposed increases to the minimum wage, while the progressive wing of the Democratic party, aligned with the Fight for 15 movement, has recently supported raising the federal minimum wage to $15 per hour. In 2021, the Congressional Budget Office released a report which estimated that incrementally raising the federal minimum wage to $15 an hour by 2025 would benefit 17 million workers, but would also reduce employment by 1.4 million people. See also
Average worker's wage
Economic inequality
Employee benefits
Family wage
Garcia v. San Antonio Metropolitan Transit Authority
Labor law
List of minimum wages by country
Minimum Wage Fixing Convention 1970
Negative and positive rights
Price controls
Salary cap
Scratch Beginnings
Thomas Sowell
Walter E. Williams
Working poor
In contrast, opponents of the minimum wage say it increases poverty and unemployment because some low-wage workers "will be unable to find work...[and] will be pushed into the ranks of the unemployed". History Modern minimum wage laws trace their origin to the Ordinance of Labourers (1349), a decree by King Edward III that set a maximum wage for laborers in medieval England. King Edward III, who was a wealthy landowner, was dependent, like his lords, on serfs to work the land. In the autumn of 1348, the Black Death reached England and decimated the population. The severe shortage of labor caused wages to soar and encouraged King Edward III to set a wage ceiling. Subsequent amendments to the ordinance, such as the Statute of Labourers (1351), increased the penalties for paying a wage above the set rates. While the laws governing wages initially set a ceiling on compensation, they were eventually used to set a living wage. An amendment to the Statute of Labourers in 1389 effectively fixed wages to the price of food. As time passed, the Justices of the Peace, who were charged with setting the maximum wage, also began to set formal minimum wages. The practice was eventually formalized with the passage of the Act Fixing a Minimum Wage in 1604 by King James I for workers in the textile industry. By the early 19th century, the Statutes of Labourers had been repealed as the increasingly capitalistic United Kingdom embraced laissez-faire policies which disfavored regulation of wages (whether upper or lower limits). The 19th century saw significant labor unrest affect many industrial nations. As trade unions were decriminalized during the century, attempts to control wages through collective agreement were made. However, this meant that a uniform minimum wage was not possible.
In Principles of Political Economy in 1848, John Stuart Mill argued that because of the collective action problems that workers faced in organisation, it was a justified departure from laissez-faire policies (or freedom of contract) to regulate people's wages and hours by law. It was not until the 1890s that the first modern legislative attempts to regulate minimum wages were seen in New Zealand and Australia. The movement for a minimum wage was initially focused on stopping sweatshop labor and controlling the proliferation of sweatshops in manufacturing industries. The sweatshops employed large numbers of women and young workers, paying them what were considered to be substandard wages. The sweatshop owners were thought to have unfair bargaining power over their employees, and a minimum wage was proposed as a means to make them pay fairly. Over time, the focus changed to helping people, especially families, become more self-sufficient. In the United States, late 19th-century arguments favoring a minimum wage also coincided with the eugenics movement. As a consequence, some economists at the time, including Royal Meeker and Henry Rogers Seager, argued for the adoption of a minimum wage not only to support workers, but to support the desired semi-skilled and skilled laborers while forcing the undesired workers (including the idle, immigrants, women, racial minorities, and the disabled) out of the labor market. The result, over the longer term, would be to limit the undesired workers' ability to earn money and have families, and thereby remove them from the economists' ideal society. Minimum wage laws The first modern national minimum wages were enacted by government recognition of unions which in turn established minimum wage policy among their members, as in New Zealand in 1894, followed by Australia in 1896 and the United Kingdom in 1909.
In the United States, statutory minimum wages were first introduced nationally in 1938, and they were reintroduced and expanded in the United Kingdom in 1998. There is now legislation or binding collective bargaining regarding minimum wage in more than 90 percent of all countries. In the European Union, 21 out of 27 member states currently have national minimum wages. Other countries, such as Sweden, Finland, Denmark, Switzerland, Austria, and Italy, have no minimum wage laws, but rely on employer groups and trade unions to set minimum earnings through collective bargaining. Minimum wage rates vary greatly across many different jurisdictions, not only in setting a particular amount of money—for example $7.25 per hour ($14,500 per year) under certain US state laws (or $2.13 for employees who receive tips, which is known as the tipped minimum wage), $11.00 in the US state of Washington, or £8.91 (for those aged 25+) in the United Kingdom—but also in terms of the pay period (for example Russia and China set monthly minimum wages) or the scope of coverage. Currently the United States federal minimum wage is $7.25 per hour. However, some states, such as Louisiana and Tennessee, have no state minimum wage law. Other states, such as Georgia and Wyoming, have minimum wages below the federal minimum wage, although the federal minimum wage is enforced in those states. Some jurisdictions allow employers to count tips given to their workers as credit towards the minimum wage levels. India was one of the first developing countries to introduce minimum wage policy, in its law of 1948. However, it is rarely implemented, even by contractors of government agencies. In Mumbai, as of 2017, the minimum wage was Rs. 348/day. India also has one of the most complicated systems, with more than 1,200 minimum wage rates depending on the geographical region.
Informal minimum wages Customs, tight labor markets, and extra-legal pressures from governments or labor unions can each produce a de facto minimum wage. So can international public opinion, by pressuring multinational companies to pay Third World workers wages usually found in more industrialized countries. The latter situation in Southeast Asia and Latin America was publicized in the 2000s, but it existed with companies in West Africa in the middle of the 20th century. Setting minimum wage Among the indicators that might be used to establish an initial minimum wage rate are ones that minimize the loss of jobs while preserving international competitiveness. Among these are general economic conditions as measured by real and nominal gross domestic product; inflation; labor supply and demand; wage levels, distribution and differentials; employment terms; productivity growth; labor costs; business operating costs; the number and trend of bankruptcies; economic freedom rankings; standards of living and the prevailing average wage rate. In the business sector, concerns include the expected increased cost of doing business, threats to profitability, rising levels of unemployment (and subsequent higher government expenditure on welfare benefits raising tax rates), and the possible knock-on effects to the wages of more experienced workers who might already be earning the new statutory minimum wage, or slightly more. Among workers and their representatives, political considerations weigh in as labor leaders seek to win support by demanding the highest possible rate. Other concerns include purchasing power, inflation indexing and standardized working hours. Economic models Supply and demand model According to the supply and demand model of the labor market shown in many economics textbooks, increasing the minimum wage decreases the employment of minimum-wage workers. One such textbook states: A firm's cost is an increasing function of the wage rate. 
The higher the wage rate, the fewer hours an employer will demand of employees. This is because, as the wage rate rises, it becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them for fewer hours). The demand for labor curve is therefore shown as a line moving down and to the right. Since higher wages increase the quantity supplied, the supply of labor curve is upward sloping, and is shown as a line moving up and to the right. If no minimum wage is in place, wages will adjust until the quantity of labor demanded is equal to the quantity supplied, reaching equilibrium, where the supply and demand curves intersect. A minimum wage behaves as a classical price floor on labor. Standard theory says that, if set above the equilibrium price, workers will be willing to provide more labor than employers demand, creating a surplus of labor, i.e. unemployment. The economic model of markets predicts the same of other commodities (like milk and wheat, for example): artificially raising the price of the commodity tends to cause an increase in quantity supplied and a decrease in quantity demanded. The result is a surplus of the commodity. When there is a wheat surplus, the government buys it. Since the government does not hire surplus labor, the labor surplus takes the form of unemployment, which tends to be higher with minimum wage laws than without them. The supply and demand model implies that by mandating a price floor above the equilibrium wage, minimum wage laws will cause unemployment. This is because a greater number of people are willing to work at the higher wage while a smaller number of jobs will be available at the higher wage. Companies can be more selective in those whom they employ, and thus the least skilled and least experienced will typically be excluded.
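The textbook mechanics described above can be sketched numerically. The linear curves and coefficients below are arbitrary assumptions chosen only to show how a binding floor creates a surplus:

```python
# Textbook supply-and-demand sketch with a wage floor (illustrative
# linear curves; the coefficients are arbitrary assumptions).

def labor_demanded(wage):   # demand falls as the wage rises
    return max(0.0, 100 - 4 * wage)

def labor_supplied(wage):   # supply rises with the wage
    return max(0.0, 2 * wage)

# Equilibrium: 100 - 4w = 2w  ->  w* = 100/6, about 16.67
w_eq = 100 / 6

# A binding floor above w* creates a surplus of labor (unemployment):
floor = 20.0
surplus = labor_supplied(floor) - labor_demanded(floor)
print(round(w_eq, 2), surplus)  # 16.67 20.0
```

At the floor of 20, workers offer 40 units of labor but employers demand only 20, so the model counts the 20-unit gap as unemployment.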
An imposition or increase of a minimum wage will generally only affect employment in the low-skill labor market, as the equilibrium wage is already at or below the minimum wage, whereas in higher skill labor markets the equilibrium wage is too high for a change in minimum wage to affect employment. Monopsony The supply and demand model predicts that raising the minimum wage helps workers whose wages are raised, and hurts people who are not hired (or lose their jobs) when companies cut back on employment. But proponents of the minimum wage hold that the situation is much more complicated than the model can account for. One complicating factor is possible monopsony in the labor market, whereby the individual employer has some market power in determining wages paid. Thus it is at least theoretically possible that the minimum wage may boost employment. Though single employer market power is unlikely to exist in most labor markets in the sense of the traditional 'company town,' asymmetric information, imperfect mobility, and the personal element of the labor transaction give some degree of wage-setting power to most firms. Modern economic theory predicts that although an excessive minimum wage may raise unemployment as it fixes a price above most demand for labor, a minimum wage at a more reasonable level can increase employment, and enhance growth and efficiency. This is because labor markets are monopsonistic and workers persistently lack bargaining power. When poorer workers have more to spend it stimulates effective aggregate demand for goods and services. Criticisms of the supply and demand model The argument that a minimum wage decreases employment is based on a simple supply and demand model of the labor market. A number of economists (for example Pierangelo Garegnani, Robert L. Vienneau, and Arrigo Opocher & Ian Steedman), building on the work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent. 
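The monopsony mechanism mentioned above can be made concrete with a small numeric sketch. The inverse labor supply $w(L) = a + bL$, the constant marginal revenue product $v$, and all parameter values below are hypothetical choices for illustration: the employer equates marginal expenditure $a + 2bL$ with $v$, so it hires fewer workers at a lower wage than a competitive market would, and a wage floor between the monopsony wage and $v$ raises employment.

```python
# Numeric illustration of the monopsony case (all parameters are
# hypothetical). Inverse labor supply: w(L) = a + b*L; the marginal
# revenue product of labor is a constant v.

a, b, v = 5.0, 0.1, 15.0   # assumed coefficients

# Monopsonist equates marginal expenditure a + 2bL with v:
L_monopsony = (v - a) / (2 * b)          # 50 workers
w_monopsony = a + b * L_monopsony        # paid a wage of 10

# A minimum wage between w_monopsony and v flattens marginal
# expenditure at the floor, so the employer hires everyone willing
# to work at that wage (as long as the floor stays below v):
minimum_wage = 12.0
L_with_floor = (minimum_wage - a) / b    # 70 workers

print(L_monopsony, w_monopsony, L_with_floor)
```

Here a floor of 12 raises both the wage (10 to 12) and employment (50 to 70), which is the theoretical possibility the monopsony argument rests on; a floor above $v$ would instead cut employment.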
Michael Anyadike-Danes and Wynne Godley argue, based on simulation results, that little of the empirical work done with the textbook model constitutes a potentially falsifiable theory, and consequently empirical evidence hardly exists for that model. Graham White argues, partially on the basis of Sraffianism, that the policy of increased labor market flexibility, including the reduction of minimum wages, does not have an "intellectually coherent" argument in economic theory. Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the standard textbook model for the minimum wage is ambiguous, and that the standard theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector market, where "the self-employed, service workers, and farm workers are typically excluded from minimum-wage coverage... [and with] one sector with minimum-wage coverage and the other without it [and possible mobility between the two]," is the basis for better analysis. Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the predictions derived from the textbook model definitely do not carry over to the two-sector case. Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook model simply cannot be relied on." An alternate view of the labor market has low-wage labor markets characterized as monopsonistic competition wherein buyers (employers) have significantly more market power than do sellers (workers). This monopsony could be a result of intentional collusion between employers, or naturalistic factors such as segmented markets, search costs, information costs, imperfect mobility and the personal element of labor markets. In such a case a simple supply and demand graph would not yield the quantity of labor clearing and the wage rate. 
This is because while the upward sloping aggregate labor supply would remain unchanged, instead of using the upward labor supply curve shown in a supply and demand diagram, monopsonistic employers would use a steeper upward sloping curve corresponding to marginal expenditures to yield the intersection with the supply curve, resulting in a wage rate lower than would be the case under competition. The amount of labor sold would also be lower than the competitive optimal allocation. Such a case is a type of market failure and results in workers being paid less than their marginal value. Under the monopsonistic assumption, an appropriately set minimum wage could increase both wages and employment, with the optimal level being equal to the marginal product of labor. This view emphasizes the role of minimum wages as a market regulation policy akin to antitrust policies, as opposed to an illusory "free lunch" for low-wage workers. Another reason minimum wage may not affect employment in certain industries is that the demand for the product the employees produce is highly inelastic. For example, if management is forced to increase wages, management can pass on the increase in wage to consumers in the form of higher prices. Since demand for the product is highly inelastic, consumers continue to buy the product at the higher price and so the manager is not forced to lay off workers. Economist Paul Krugman argues this explanation neglects to explain why the firm was not charging this higher price absent the minimum wage. Three other possible reasons minimum wages do not affect employment were suggested by Alan Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage may "render moot" the potential problem of recruiting workers at a higher wage than current workers; and minimum wage workers might represent such a small proportion of a business's cost that the increase is too small to matter.
He admits that he does not know if these are correct, but argues that "the list demonstrates that one can accept the new empirical findings and still be a card-carrying economist." Mathematical models of the minimum wage and frictional labor markets The following mathematical models are more quantitative in orientation, and highlight some of the difficulties in determining the impact of the minimum wage on labor market outcomes. Specifically, these models focus on labor markets with frictions. Welfare and labor market participation Assume that the decision to participate in the labor market results from a trade-off between being an unemployed job seeker and not participating at all. All individuals whose expected utility outside the labor market is less than the expected utility of an unemployed person decide to participate in the labor market. Let $w$ be the wage, $r$ the interest rate, $z$ the instantaneous income of unemployed persons, $q$ the exogenous job destruction rate, $\theta$ the labor market tightness, and $\theta m(\theta)$ the job finding rate. In the basic search and matching model, the expected utility $V_u$ of unemployed persons and the expected utility $V_e$ of employed persons are defined by $rV_u = z + \theta m(\theta)(V_e - V_u)$ and $rV_e = w + q(V_u - V_e)$. The profits $\Pi_e$ and $\Pi_v$ expected from a filled job and a vacant one are $r\Pi_e = y - w + q(\Pi_v - \Pi_e)$ and $r\Pi_v = -h + m(\theta)(\Pi_e - \Pi_v)$, where $h$ is the cost of a vacant job and $y$ is the productivity. When the free entry condition $\Pi_v = 0$ is satisfied, these two equalities yield the following relationship between the wage and the labor market tightness $\theta$: $y - w = \frac{(r+q)h}{m(\theta)}$. If $w$ represents a minimum wage that applies to all workers, this equation completely determines the equilibrium value of the labor market tightness $\theta$. There are two conditions associated with the matching function: $\lim_{\theta \to 0} m(\theta) = +\infty$ and $\lim_{\theta \to +\infty} m(\theta) = 0$. This implies that $\theta$ is a decreasing function of the minimum wage $w$, and so is the job finding rate $\theta m(\theta)$. A hike in the minimum wage degrades the profitability of a job, so firms post fewer vacancies and the job finding rate falls off.
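The claim that tightness falls as the minimum wage rises can be checked numerically from the free-entry condition $y - w = (r+q)h/m(\theta)$. The matching-rate form $m(\theta) = A\theta^{-1/2}$ and all parameter values below are assumptions for illustration, not calibrated estimates:

```python
# Solve the free-entry condition for labor market tightness theta,
# assuming a matching rate m(theta) = A * theta**-0.5 (hypothetical form).

A, r, q, h, y = 1.0, 0.05, 0.10, 0.3, 1.0   # assumed parameters

def tightness(w):
    # free entry: m(theta) = (r + q) * h / (y - w); invert m(theta)
    m_required = (r + q) * h / (y - w)
    return (A / m_required) ** 2

def job_finding_rate(w):
    theta = tightness(w)
    return theta * A * theta ** -0.5   # theta * m(theta)

# Both tightness and the job finding rate fall as the wage floor rises:
for w in (0.5, 0.6, 0.7):
    print(round(tightness(w), 2), round(job_finding_rate(w), 2))
```

Raising the binding wage from 0.5 to 0.7 shrinks the profit of a filled job, so free entry requires a higher matching rate per vacancy, which only happens at lower tightness; the job finding rate falls with it, exactly as the text argues.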
Now $rV_u$ can be rewritten as $rV_u = \frac{(r+q)z + \theta m(\theta) w}{r + q + \theta m(\theta)}$. Using the relationship between the wage and labor market tightness to eliminate the wage from the last equation gives us $rV_u = \frac{(r+q)z + \theta m(\theta)\left[y - \frac{(r+q)h}{m(\theta)}\right]}{r + q + \theta m(\theta)}$. If we maximize $rV_u$ in this equation with respect to the labor market tightness, we find that the maximum is attained when the bargaining power parameter $\gamma$ equals $\eta(\theta)$, where $\eta(\theta) = -\frac{\theta m'(\theta)}{m(\theta)}$ is the elasticity of the matching function. This result shows that the expected utility of unemployed workers is maximized when the minimum wage is set at a level that corresponds to the wage level of the decentralized economy in which the bargaining power parameter is equal to the elasticity $\eta(\theta)$. The level of the negotiated wage is $w = (1-\gamma)z + \gamma(y + \theta h)$. If $\gamma < \eta(\theta)$, then an increase in the minimum wage increases participation and the unemployment rate, with an ambiguous impact on employment. When the bargaining power of workers is less than $\eta(\theta)$, an increase in the minimum wage improves the welfare of the unemployed – this suggests that minimum wage hikes can improve labor market efficiency, at least up to the point when bargaining power equals $\eta(\theta)$. On the other hand, if $\gamma > \eta(\theta)$, any increase in the minimum wage entails a decline in labor market participation and an increase in unemployment. Job search effort In the model just presented, we found that the minimum wage always increases unemployment. This result does not necessarily hold when the search effort of workers is endogenous. Consider a model where the intensity of the job search is designated by the scalar $s$, which can be interpreted as the amount of time and/or intensity of the effort devoted to search. Assume that the arrival rate of job offers is $s\lambda$ and that the wage distribution is degenerated to a single wage $w$. Denote by $c(s)$ the cost arising from the search effort, with $c' > 0$ and $c'' > 0$. Then the discounted utilities are given by $rV_u = z - c(s) + s\lambda(V_e - V_u)$ and $rV_e = w + q(V_u - V_e)$. Therefore, the optimal search effort is such that the marginal cost of performing the search is equal to its marginal return: $c'(s) = \lambda(V_e - V_u)$. This implies that the optimal search effort increases as the difference between the expected utility of the job holder and the expected utility of the job seeker grows.
In fact, this difference actually grows with the wage. To see this, take the difference of the two discounted utilities to find:

$$V_e - V_u = \frac{w - z + c(e)}{r + q + e\lambda}$$

Then differentiating with respect to $w$ and rearranging (using the first-order condition for effort) gives us:

$$\frac{\mathrm{d}(V_e - V_u)}{\mathrm{d}w} = \frac{1}{r + q + e^*\lambda} > 0$$

where $e^*$ is the optimal search effort. This implies that a wage increase drives up job search effort and, therefore, the job finding rate. Additionally, the unemployment rate at equilibrium is given by:

$$u = \frac{q}{q + e^*\lambda}$$

A hike in the wage, which increases the search effort and the job finding rate, decreases the unemployment rate. So it is possible that a hike in the minimum wage may, by boosting the search effort of job seekers, boost employment. Taken in sum with the previous section, the minimum wage in labor markets with frictions can improve employment and decrease the unemployment rate when it is sufficiently low. However, a high minimum wage is detrimental to employment and increases the unemployment rate.

Empirical studies

Economists disagree as to the measurable impact of minimum wages in practice. This disagreement usually takes the form of competing empirical tests of the elasticities of supply and demand in labor markets and the degree to which markets differ from the efficiency that models of perfect competition predict. Economists have done empirical studies on different aspects of the minimum wage, including:

- Employment effects, the most frequently studied aspect
- Effects on the distribution of wages and earnings among low-paid and higher-paid workers
- Effects on the distribution of incomes among low-income and higher-income families
- Effects on the skills of workers through job training and the deferring of work to acquire education
- Effects on prices and profits
- Effects on on-the-job training

Until the mid-1990s, a general consensus existed among economists, both conservative and liberal, that the minimum wage reduced employment, especially among younger and low-skill workers.
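The comparative statics of the search-effort model above (effort rising with the wage, and the equilibrium unemployment rate falling) can be illustrated numerically. The quadratic search cost and all parameter values below are my own illustrative assumptions:

```python
# Numeric sketch of the endogenous-search-effort model above. Optimal effort
# solves c'(e) = lambda*(V_e - V_u) with V_e - V_u = (w - z + c(e))/(r + q + e*lambda).
# The quadratic search cost c(e) = 0.5*e**2 and the parameter values are
# illustrative assumptions, not taken from the text.

lam, z, r, q = 1.0, 0.4, 0.05, 0.15

def optimal_effort(w):
    """Bisection on the first-order condition e = lam*(w - z + 0.5*e*e)/(r + q + e*lam)."""
    lo, hi = 1e-6, 10.0
    for _ in range(80):
        e = 0.5 * (lo + hi)
        if e < lam * (w - z + 0.5 * e * e) / (r + q + e * lam):
            lo = e          # marginal return still exceeds marginal cost
        else:
            hi = e
    return 0.5 * (lo + hi)

for w in (0.6, 0.8, 1.0):
    e = optimal_effort(w)
    u = q / (q + e * lam)   # steady-state unemployment rate u = q/(q + e*lambda)
    print(f"wage {w:.1f}: effort {e:.3f}, unemployment rate {u:.3f}")
```

With these numbers, raising the single wage from 0.6 to 1.0 roughly doubles search effort and lowers the equilibrium unemployment rate, in line with the mechanism in the text.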
In addition to the basic supply-demand intuition, there were a number of empirical studies that supported this view. For example, Gramlich (1976) found that many of the benefits went to higher income families, and that teenagers were made worse off by the unemployment associated with the minimum wage. Brown et al. (1983) noted that time series studies to that point had found that for a 10 percent increase in the minimum wage, there was a decrease in teenage employment of 1–3 percent. However, the studies found wider variation, from 0 to over 3 percent, in their estimates for the effect on teenage unemployment (teenagers without a job and looking for one). In contrast to the simple supply and demand diagram, it was commonly found that teenagers withdrew from the labor force in response to the minimum wage, which produced the possibility of equal reductions in the supply as well as the demand for labor at a higher minimum wage and hence no impact on the unemployment rate. Using a variety of specifications of the employment and unemployment equations (using ordinary least squares vs. generalized least squares regression procedures, and linear vs. logarithmic specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent decrease in teenage employment, and no change in the teenage unemployment rate. The study also found a small, but statistically significant, increase in unemployment for adults aged 20–24. Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new estimates encompassing a period when the real (i.e., inflation-adjusted) value of the minimum wage was declining, because it had not increased since 1981. She found that a 10% increase in the minimum wage decreased the absolute teenage employment by 0.6%, with no effect on the teen or young adult unemployment rates. Some research suggests that the unemployment effects of small minimum wage increases are dominated by other factors. 
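The time-series estimates quoted above can be restated as elasticities with a line of arithmetic: a 10 percent minimum wage increase producing a 1–3 percent fall in teenage employment corresponds to an employment elasticity of roughly −0.1 to −0.3. A trivial sketch (the function name is my own):

```python
# Worked arithmetic for the estimates quoted above: percent change in
# employment divided by percent change in the minimum wage gives the
# implied employment elasticity. Function name is illustrative only.

def employment_elasticity(pct_wage_change, pct_employment_change):
    return pct_employment_change / pct_wage_change

print(employment_elasticity(10.0, -1.0))   # lower end of the 1-3% range: -0.1
print(employment_elasticity(10.0, -3.0))   # upper end: -0.3
```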
In Florida, where voters approved an increase in 2004, a follow-up comprehensive study after the increase confirmed a strong economy with increased employment above previous years in Florida and better than in the US as a whole. When it comes to on-the-job training, some believe the increase in wages is taken out of training expenses. A 2001 empirical study found that there is "no evidence that minimum wages reduce training, and little evidence that they tend to increase training." The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs. ... America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage. ... High minimum wages, however, particularly in rigid labour markets, do appear to hit employment. France has the rich world's highest wage floor, at more than 60% of the median for adults and a far bigger fraction of the typical wage for the young. This helps explain why France also has shockingly high rates of youth unemployment: 26% for 15- to 24-year-olds." A 2019 study in the Quarterly Journal of Economics found that minimum wage increases did not have an impact on the overall number of low-wage jobs in the five years subsequent to the wage increase. However, it did find disemployment in 'tradeable' sectors, defined as those sectors most reliant on entry level or low skilled labor. Another study, which shared authors with the above and was published in the American Economic Review, found that a large and persistent increase in the minimum wage in Hungary produced some disemployment
or striped mullet, Mugil cephalus, important food fish species in the family Mugilidae. Goatfish, or "red mullets", of the family Mullidae; in particular, red mullets of the genus Mullus Malagasy mountain mullet, Acentrogobius therezieni, a species of fish in the family Gobiidae endemic to Madagascar Pearl mullet, Chalcalburnus tarichi, a species of ray-finned fish in the family Cyprinidae native to Turkey Shorthead redhorse, Moxostoma macrolepidotum, a freshwater fish of North America, also known as common mullet, mullet, redhorse mullet | Wisconsin Other Mullet, a type of star in heraldry Mullet (film), a 2001 Australian film The Mullets (TV series), a UPN sitcom Mullets (comic strip), a short-lived comic strip by Rick Stromoski and Steve McGarry A person born in Arundel due to the presence of the Mullet fish in the local river Norman Mullet, the chief superintendent in the British television show A Touch of Frost (TV series) Mullet Festival, annual event held in Niceville, Florida Land Mullet, Egernia major, one of the largest members of the skink family (Scincidae) native to Australia See also American Mullet, a 2001 documentary film directed by Jennifer Arnold Mullet Key, a historic island near Crystal River, Florida Mullet Fever, the fifth |
"'Aroint thee, witch!' the rump-fed ronyon cries./Her husband's to Aleppo gone, master o' the Tiger" (1.3.6–7). This has been thought to allude to the Tiger, a ship that returned to England 27 June 1606 after a disastrous voyage in which many of the crew were killed by pirates. A few lines later the witch speaks of the sailor, "He shall live a man forbid:/Weary se'nnights nine times nine" (1.3.21–22). The real ship was at sea 567 days, the product of 7 × 9 × 9, which has been taken as a confirmation of the allusion, which, if correct, confirms that the witch scenes were either written or amended later than July 1606. The play is not considered to have been written any later than 1607, since, as Kermode notes, there are "fairly clear allusions to the play in 1607". One notable reference is in Francis Beaumont's Knight of the Burning Pestle, first performed in 1607. The following lines (Act V, Scene 1, 24–30) are, according to scholars, a clear allusion to the scene in which Banquo's ghost haunts Macbeth at the dinner table: When thou art at thy table with thy friends, Merry in heart, and filled with swelling wine, I'll come in midst of all thy pride and mirth, Invisible to all men but thyself, And whisper such a sad tale in thine ear Shall make thee let the cup fall from thy hand, And stand as mute and pale as death itself. Macbeth was first printed in the First Folio of 1623 and the Folio is the only source for the text. Some scholars contend that the Folio text was abridged and rearranged from an earlier manuscript or prompt book. Often cited as interpolation are stage cues for two songs, whose lyrics are not included in the Folio but are included in Thomas Middleton's play The Witch, which was written between the accepted date for Macbeth (1606) and the printing of the Folio. Many scholars believe these songs were editorially inserted into the Folio, though whether they were Middleton's songs or preexisting songs is not certain.
It is also widely believed that the character of Hecate, as well as some lines of the First Witch (4.1 124–131), were not part of Shakespeare's original play but were added by the Folio editors and possibly written by Middleton, though "there is no completely objective proof" of such interpolation. Themes and motifs Macbeth is an anomaly among Shakespeare's tragedies in certain critical ways. It is short: more than a thousand lines shorter than Othello and King Lear, and only slightly more than half as long as Hamlet. This brevity has suggested to many critics that the received version is based on a heavily cut source, perhaps a prompt-book for a particular performance. This would reflect other Shakespeare plays existing in both Quarto and the Folio, where the Quarto versions are usually longer than the Folio versions. Macbeth was first printed in the First Folio, but has no Quarto version – if there were a Quarto, it would probably be longer than the Folio version. That brevity has also been connected to other unusual features: the fast pace of the first act, which has seemed to be "stripped for action"; the comparative flatness of the characters other than Macbeth; and the oddness of Macbeth himself compared with other Shakespearean tragic heroes. A. C. Bradley, in considering this question, concluded the play "always was an extremely short one", noting the witch scenes and battle scenes would have taken up some time in performance, remarking, "I do not think that, in reading, we feel Macbeth to be short: certainly we are astonished when we hear it is about half as long as Hamlet. Perhaps in the Shakespearean theatre too it seemed to occupy a longer time than the clock recorded." As a tragedy of character At least since the days of Alexander Pope and Samuel Johnson, analysis of the play has centred on the question of Macbeth's ambition, commonly seen as so dominant a trait that it defines the character. 
Johnson asserted that Macbeth, though esteemed for his military bravery, is wholly reviled. This opinion recurs in critical literature, and, according to Caroline Spurgeon, is supported by Shakespeare himself, who apparently intended to degrade his hero by vesting him with clothes unsuited to him and to make Macbeth look ridiculous by several exaggerations he applies: His garments seem either too big or too small for him – as his ambition is too big and his character too small for his new and unrightful role as king. When he feels as if "dressed in borrowed robes", after his new title as Thane of Cawdor, prophesied by the witches, has been confirmed by Ross (I, 3, ll. 108–109), Banquo comments: "New honours come upon him, / Like our strange garments, cleave not to their mould, / But with the aid of use" (I, 3, ll. 145–146). And, at the end, when the tyrant is at bay at Dunsinane, Caithness sees him as a man trying in vain to fasten a large garment on him with too small a belt: "He cannot buckle his distemper'd cause / Within the belt of rule" (V, 2, ll. 14–15) while Angus sums up what everybody thinks ever since Macbeth's accession to power: "now does he feel his title / Hang loose about him, like a giant's robe / Upon a dwarfish thief" (V, 2, ll. 18–20). Like Richard III, but without that character's perversely appealing exuberance, Macbeth wades through blood until his inevitable fall. As Kenneth Muir writes, "Macbeth has not a predisposition to murder; he has merely an inordinate ambition that makes murder itself seem to be a lesser evil than failure to achieve the crown." Some critics, such as E. E. Stoll, explain this characterisation as a holdover from Senecan or medieval tradition. Shakespeare's audience, in this view, expected villains to be wholly bad, and Senecan style, far from prohibiting a villainous protagonist, all but demanded it. Yet for other critics, it has not been so easy to resolve the question of Macbeth's motivation.
Robert Bridges, for instance, perceived a paradox: a character able to express such convincing horror before Duncan's murder would likely be incapable of committing the crime. For many critics, Macbeth's motivations in the first act appear vague and insufficient. John Dover Wilson hypothesised that Shakespeare's original text had an extra scene or scenes where husband and wife discussed their plans. This interpretation is not fully provable; however, the motivating role of ambition for Macbeth is universally recognised. The evil actions motivated by his ambition seem to trap him in a cycle of increasing evil, as Macbeth himself recognises: "I am in blood / Stepp'd in so far that, should I wade no more, / Returning were as tedious as go o'er." While working on Russian translations of Shakespeare's works, Boris Pasternak compared Macbeth to Raskolnikov, the protagonist of Crime and Punishment by Fyodor Dostoevsky. Pasternak argues that "neither Macbeth or Raskolnikov is a born criminal or a villain by nature. They are turned into criminals by faulty rationalizations, by deductions from false premises." He goes on to argue that Lady Macbeth is "feminine ... one of those active, insistent wives" who becomes her husband's "executive, more resolute and consistent than he is himself". According to Pasternak, she is only helping Macbeth carry out his own wishes, to her own detriment. As a tragedy of moral order The disastrous consequences of Macbeth's ambition are not limited to him. Almost from the moment of the murder, the play depicts Scotland as a land shaken by inversions of the natural order. Shakespeare may have intended a reference to the great chain of being, although the play's images of disorder are mostly not specific enough to support detailed intellectual readings. He may also have intended an elaborate compliment to James's belief in the divine right of kings, although this hypothesis, outlined at greatest length by Henry N. Paul, is not universally accepted.
As in Julius Caesar, though, perturbations in the political sphere are echoed and even amplified by events in the material world. Among the most often depicted of the inversions of the natural order is sleep. Macbeth's announcement that he has "murdered sleep" is figuratively mirrored in Lady Macbeth's sleepwalking. Macbeth's generally accepted indebtedness to medieval tragedy is often seen as significant in the play's treatment of moral order. Glynne Wickham connects the play, through the Porter, to a mystery play on the harrowing of hell. Howard Felperin argues that the play has a more complex attitude toward "orthodox Christian tragedy" than is often admitted; he sees a kinship between the play and the tyrant plays within the medieval liturgical drama. The theme of androgyny is often seen as a special aspect of the theme of disorder. Inversion of normative gender roles is most famously associated with the witches and with Lady Macbeth as she appears in the first act. Whatever Shakespeare's degree of sympathy with such inversions, the play ends with a thorough return to normative gender values. Some feminist psychoanalytic critics, such as Janet Adelman, have connected the play's treatment of gender roles to its larger theme of inverted natural order. In this light, Macbeth is punished for his violation of the moral order by being removed from the cycles of nature (which are figured as female); nature itself (as embodied in the movement of Birnam Wood) is part of the restoration of moral order. As a poetic tragedy Critics in the early twentieth century reacted against what they saw as an excessive dependence on the study of character in criticism of the play. This dependence, though most closely associated with Andrew Cecil Bradley, is clear as early as the time of Mary Cowden Clarke, who offered precise, if fanciful, accounts of the predramatic lives of Shakespeare's female leads.
She suggested, for instance, that the child Lady Macbeth refers to in the first act died during a foolish military action. Witchcraft and evil In the play, the Three Witches represent darkness, chaos, and conflict, while their role is as agents and witnesses. Their presence communicates treason and impending doom. During Shakespeare's day, witches were seen as worse than rebels, "the most notorious traytor and rebell that can be". They were not only political traitors, but spiritual traitors as well. Much of the confusion that springs from them comes from their ability to straddle the play's borders between reality and the supernatural. They are so deeply entrenched in both worlds that it is unclear whether they control fate, or whether they are merely its agents. They defy logic, not being subject to the rules of the real world. The witches' lines in the first act: "Fair is foul, and foul is fair: Hover through the fog and filthy air" are often said to set the tone for the rest of the play by establishing a sense of confusion. Indeed, the play is filled with situations where evil is depicted as good, while good is rendered evil. The line "Double, double toil and trouble," communicates the witches' intent clearly: they seek only trouble for the mortals around them. The witches' spells are remarkably similar to the spells of the witch Medusa in Anthony Munday's play Fidele and Fortunio published in 1584, and Shakespeare may have been influenced by these. While the witches do not tell Macbeth directly to kill King Duncan, they use a subtle form of temptation when they tell Macbeth that he is destined to be king. By placing this thought in his mind, they effectively guide him on the path to his own destruction. This follows the pattern of temptation used at the time of Shakespeare. First, they argued, a thought is put in a man's mind, then the person may either indulge in the thought or reject it. Macbeth indulges in it, while Banquo rejects it. According to J. A.
Bryant Jr., Macbeth also makes use of Biblical parallels, notably between King Duncan's murder and the murder of Christ. Superstition and "The Scottish Play" While many today would say that any misfortune surrounding a production is mere coincidence, actors and others in the theatre industry often consider it bad luck to mention Macbeth by name while inside a theatre, and sometimes refer to it indirectly, for example as "The Scottish Play", or "MacBee", or when referring to the characters and not the play, "Mr. and Mrs. M", or "The Scottish King". This is because Shakespeare (or the play's revisers) is said to have used the spells of real witches in his text, purportedly angering the witches and causing them to curse the play. Thus, to say the name of the play inside a theatre is believed to doom the production to failure, and perhaps cause physical injury or death to cast members. There are stories of accidents, misfortunes and even deaths taking place during runs of Macbeth. According to the actor Sir Donald Sinden, in his Sky Arts TV series Great West End Theatres, contrary to popular myth, Shakespeare's tragedy Macbeth is not the unluckiest play as superstition likes to portray it. Exactly the opposite! The origin of the unfortunate moniker dates back to repertory theatre days when each town and village had at least one theatre to entertain the public. If a play was not doing well, it would invariably get 'pulled' and replaced with a sure-fire audience pleaser – Macbeth guaranteed full-houses. So when the weekly theatre newspaper, The Stage was published, listing what was on in each theatre in the country, it was instantly noticed what shows had not worked the previous week, as they had been replaced by a definite crowd-pleaser. More actors have died during performances of Hamlet than in the "Scottish play" as the profession still calls it.
It is forbidden to quote from it backstage as this could cause the current play to collapse and have to be replaced, causing possible unemployment. Several methods exist to dispel the curse, depending on the actor. One, attributed to Michael York, is to immediately leave the building the stage is in with the person who uttered the name, walk around it three times, spit over their left shoulders, say an obscenity then wait to be invited back into the building. A related practice is to spin around three times as fast as possible on the spot, sometimes accompanied by spitting over their shoulder, and uttering an obscenity. Another popular "ritual" is to leave the room, knock three times, be invited in, and then quote a line from Hamlet. Yet another is to recite lines from The Merchant of Venice, thought to be a lucky play. Sir Patrick Stewart, on the radio program Ask Me Another, asserted "if you have played the role of the Scottish thane, | as the profession still calls it. It is forbidden to quote from it backstage as this could cause the current play to collapse and have to be replaced, causing possible unemployment. Several methods exist to dispel the curse, depending on the actor. One, attributed to Michael York, is to immediately leave the building the stage is in with the person who uttered the name, walk around it three times, spit over their left shoulders, say an obscenity then wait to be invited back into the building. A related practice is to spin around three times as fast as possible on the spot, sometimes accompanied by spitting over their shoulder, and uttering an obscenity. Another popular "ritual" is to leave the room, knock three times, be invited in, and then quote a line from Hamlet. Yet another is to recite lines from The Merchant of Venice, thought to be a lucky play. Sir Patrick Stewart, on the radio program Ask Me Another, asserted "if you have played the role of the Scottish thane, then you are allowed to say the title, any time anywhere". 
Performance history Shakespeare's day to the Interregnum The only eyewitness account of Macbeth in Shakespeare's lifetime was recorded by Simon Forman, who saw a performance at the Globe on 20 April 1610. Scholars have noted discrepancies between Forman's account and the play as it appears in the Folio. For example, he makes no mention of the apparition scene, of Hecate, of the man not of woman born, or of Birnam Wood. However, Clark observes that Forman's accounts were often inaccurate and incomplete (for instance omitting the statue scene from The Winter's Tale) and his interest did not seem to be in "giving full accounts of the productions". As mentioned above, the Folio text is thought by some to be an alteration of the original play. This has led to the theory that the play as we know it from the Folio was an adaptation for indoor performance at the Blackfriars Theatre (which was operated by the King's Men from 1608) – and even speculation that it represents a specific performance before King James. The play contains more musical cues than any other play in the canon as well as a significant use of sound effects. Restoration and eighteenth century All theatres were closed down by the Puritan government on 6 September 1642. Upon the restoration of the monarchy in 1660, two patent companies (the King's Company and the Duke's Company) were established, and the existing theatrical repertoire divided between them. Sir William Davenant, founder of the Duke's Company, adapted Shakespeare's play to the tastes of the new era, and his version would dominate on stage for around eighty years. Among the changes he made were the expansion of the role of the witches, introducing new songs, dances and 'flying', and the expansion of the role of Lady Macduff as a foil to Lady Macbeth. There were, however, performances outside the patent companies: among the evasions of the Duke's Company's monopoly was a puppet version of Macbeth.
Macbeth was a favourite of the seventeenth-century diarist Samuel Pepys, who saw the play on 5 November 1664 ("admirably acted"), 28 December 1666 ("most excellently acted"), ten days later on 7 January 1667 ("though I saw it lately, yet [it] appears a most excellent play in all respects"), on 19 April 1667 ("one of the best plays for a stage ... that ever I saw"), again on 16 October 1667 ("was vexed to see Young, who is but a bad actor at best, act Macbeth in the room of Betterton, who, poor man! is sick"), and again three weeks later on 6 November 1667 ("[at] Macbeth, which we still like mightily"), yet again on 12 August 1668 ("saw Macbeth, to our great content"), and finally on 21 December 1668, on which date the king and court were also present in the audience. The first professional performances of Macbeth in North America were probably those of The Hallam Company. In 1744, David Garrick revived the play, abandoning Davenant's version and instead advertising it "as written by Shakespeare". In fact this claim was largely false: he retained much of Davenant's more popular business for the witches, and himself wrote a lengthy death speech for Macbeth. And he cut more than 10% of Shakespeare's play, including the drunken porter, the murder of Lady Macduff's son, and Malcolm's testing of Macduff. Hannah Pritchard was his greatest stage partner, having her premiere as his Lady Macbeth in 1747. He would later drop the play from his repertoire upon her retirement from the stage. Mrs. Pritchard was the first actress to achieve acclaim in the role of Lady Macbeth – at least partly due to the removal of Davenant's material, which made irrelevant moral contrasts with Lady Macduff. Garrick's portrayal focused on the inner life of the character, endowing him with an innocence vacillating between good and evil, and betrayed by outside influences. 
He portrayed a man capable of observing himself, as if a part of him remained untouched by what he had done, the play moulding him into a man of sensibility, rather than him descending into a tyrant. John Philip Kemble first played Macbeth in 1778. Although usually regarded as the antithesis of Garrick, Kemble nevertheless refined aspects of Garrick's portrayal into his own. However it was the "towering and majestic" Sarah Siddons (Kemble's sister) who became a legend in the role of Lady Macbeth. In contrast to Hannah Pritchard's savage, demonic portrayal, Siddons' Lady Macbeth, while terrifying, was nevertheless – in the scenes in which she expresses her regret and remorse – tenderly human. And in portraying her actions as done out of love for her husband, Siddons deflected from him some of the moral responsibility for the play's carnage. Audiences seem to have found the sleepwalking scene particularly mesmerising: Hazlitt said of it that "all her gestures were involuntary and mechanical ... She glided on and off the stage almost like an apparition." In 1794, Kemble dispensed with the ghost of Banquo altogether, allowing the audience to see Macbeth's reaction as his wife and guests see it, and relying upon the fact that the play was so well known that his audience would already be aware that a ghost enters at that point. Ferdinand Fleck, notable as the first German actor to present Shakespeare's tragic roles in their fullness, played Macbeth at the Berlin National Theatre from 1787. Unlike his English counterparts, he portrayed the character as achieving his stature after the murder of Duncan, growing in presence and confidence: thereby enabling stark contrasts, such as in the banquet scene, which he ended babbling like a child. Nineteenth century Performances outside the patent theatres were instrumental in bringing the monopoly to an end. 
Robert Elliston, for example, produced a popular adaptation of Macbeth in 1809 at the Royal Circus described in its publicity as "this matchless piece of pantomimic and choral performance", which circumvented the illegality of speaking Shakespeare's words through mimed action, singing, and doggerel verse written by J. C. Cross. In 1809, in an unsuccessful attempt to take Covent Garden upmarket, Kemble installed private boxes, increasing admission prices to pay for the improvements. The inaugural run at the newly renovated theatre was Macbeth, which was disrupted for over two months with cries of "Old prices!" and "No private boxes!" until Kemble capitulated to the protestors' demands. Edmund Kean at Drury Lane gave a psychological portrayal of the central character, with a common touch, but was ultimately unsuccessful in the role. However he did pave the way for the most acclaimed performance of the nineteenth century, that of William Charles Macready. Macready played the role over a 30-year period, firstly at Covent Garden in 1820 and finally in his retirement performance. Although his playing evolved over the years, it was noted throughout for the tension between the idealistic aspects and the weaker, venal aspects of Macbeth's character. His staging was full of spectacle, including several elaborate royal processions. In 1843 the Theatres Regulation Act finally brought the patent companies' monopoly to an end. From that time until the end of the Victorian era, London theatre was dominated by the actor-managers, and the style of presentation was "pictorial" – proscenium stages filled with spectacular stage-pictures, often featuring complex scenery, large casts in elaborate costumes, and frequent use of tableaux vivants. Charles Kean (son of Edmund), at London's Princess's Theatre from 1850 to 1859, took an antiquarian view of Shakespeare performance, setting his Macbeth in a historically accurate eleventh-century Scotland.
His leading lady, Ellen Tree, created a sense of the character's inner life: The Times critic saying "The countenance which she assumed ... when luring on Macbeth in his course of crime, was actually appalling in intensity, as if it denoted a hunger after guilt." At the same time, special effects were becoming popular: for example in Samuel Phelps' Macbeth the witches performed behind green gauze, enabling them to appear and disappear using stage lighting. In 1849, rival performances of the play sparked the Astor Place riot in Manhattan. The popular American actor Edwin Forrest, whose Macbeth was said to be like "the ferocious chief of a barbarous tribe" played the central role at the Broadway Theatre to popular acclaim, while the "cerebral and patrician" English actor Macready, playing the same role at the Astor Place Opera House, suffered constant heckling. The existing enmity between the two men (Forrest had openly hissed Macready at a recent performance of Hamlet in Britain) was taken up by Forrest's supporters – formed from the working class and lower middle class and anti-British agitators, keen to attack the upper-class pro-British patrons of the Opera House and the colonially-minded Macready. Nevertheless, Macready performed the role again three days later to a packed house while an angry mob gathered outside. The militia tasked with controlling the situation fired into the mob. In total, 31 rioters were killed and over 100 injured. Charlotte Cushman is unique among nineteenth century interpreters of Shakespeare in achieving stardom in roles of both genders. Her New York debut was as Lady Macbeth in 1836, and she would later be admired in London in the same role in the mid-1840s. Helen Faucit was considered the embodiment of early-Victorian notions of femininity. 
But for this reason she largely failed when she eventually played Lady Macbeth in 1864: her serious attempt to embody the coarser aspects of Lady Macbeth's character jarred harshly with her public image. Adelaide Ristori, the great Italian actress, brought her Lady Macbeth to London in 1863 in Italian, and again in 1873 in an English translation cut in such a way as to be, in effect, Lady Macbeth's tragedy. Henry Irving was the most successful of the late-Victorian actor-managers, but his Macbeth failed to find favour with audiences. His desire for psychological credibility reduced certain aspects of the role: he described Macbeth as a brave soldier but a moral coward, and played him untroubled by conscience – clearly already contemplating the murder of Duncan before his encounter with the witches. Irving's leading lady was Ellen Terry, but her Lady Macbeth was unsuccessful with the public, for whom a century of performances influenced by Sarah Siddons had created expectations at odds with Terry's conception of the role. Late nineteenth-century European Macbeths aimed for heroic stature, but at the expense of subtlety: Tommaso Salvini in Italy and Adalbert Matkowsky in Germany were said to inspire awe, but elicited little pity.

20th century to present

Two developments changed the nature of Macbeth performance in the 20th century: first, developments in the craft of acting itself, especially the ideas of Stanislavski and Brecht; and second, the rise of the dictator as a political icon. The latter has not always assisted the performance: it is difficult to sympathise with a Macbeth based on Hitler, Stalin, or Idi Amin. Barry Jackson, at the Birmingham Repertory Theatre in 1923, was the first of the 20th-century directors to costume Macbeth in modern dress.
In 1936, a decade before his film adaptation of the play, Orson Welles directed Macbeth for the Negro Theatre Unit of the Federal Theatre Project at the Lafayette Theatre in Harlem, using black actors and setting the action in Haiti, with drums and voodoo rituals to establish the witches' scenes. The production, dubbed The Voodoo Macbeth, proved inflammatory in the aftermath of the Harlem riots, accused of making fun of black culture and of being "a campaign to burlesque negroes" until Welles persuaded crowds that his use of black actors and voodoo made important cultural statements. A performance which is frequently referenced as an example of the play's curse was the outdoor production directed by Burgess Meredith in 1953 in the British colony of Bermuda, starring Charlton Heston. Using the imposing spectacle of Fort St. Catherine as a key element of the set, the production was plagued by a host of mishaps, including Heston being burned when his tights caught fire. The critical consensus is that there have been three great Macbeths on the English-speaking stage in the 20th century, all of them commencing at Stratford-upon-Avon: Laurence Olivier in 1955, Ian McKellen in 1976 and Antony Sher in 1999. Olivier's portrayal (directed by Glen Byam Shaw, with Vivien Leigh as Lady Macbeth) was immediately hailed as a masterpiece. Kenneth Tynan expressed the view that it succeeded because Olivier built the role to a climax at the end of the play, whereas most actors spend all they have in the first two acts. The play caused grave difficulties for the Royal Shakespeare Company, especially at the (then) Shakespeare Memorial Theatre. Peter Hall's 1967 production was (in Michael Billington's words) "an acknowledged disaster" with the use of real leaves from Birnam Wood getting unsolicited first-night laughs, and Trevor Nunn's 1974 production was (Billington again) "an over-elaborate religious spectacle".
But Nunn achieved success for the RSC in his 1976 production at the intimate Other Place, with Ian McKellen and Judi Dench in the central roles. A small cast worked within a simple circle, and McKellen's Macbeth had nothing noble or likeable about him, being a manipulator in a world of manipulative characters. They were a young couple, physically passionate, "not monsters but recognisable human beings", but their relationship atrophied as the action progressed. The RSC again achieved critical success in Gregory Doran's 1999 production at The Swan, with Antony Sher and Harriet Walter in the central roles, once again demonstrating the suitability of the play for smaller venues. Doran's witches spoke their lines to a theatre in absolute darkness, and the opening visual image was the entrance of Macbeth and Banquo in the berets and fatigues of modern warfare, carried on the shoulders of triumphant troops. In contrast to Nunn, Doran presented a world in which King Duncan and his soldiers were ultimately benign and honest, heightening the deviance of Macbeth (who seems genuinely surprised by the witches' prophecies) and Lady Macbeth in plotting to kill the king. The play said little about politics, instead powerfully presenting its central characters' psychological collapse. Macbeth returned to the RSC in 2018, when Christopher Eccleston played the title role, with Niamh Cusack as his wife, Lady Macbeth. The play later transferred to the Barbican in London. In Soviet-controlled Prague in 1977, faced with the
known as "straight edge," "is not a set of rules; I'm not telling you what to do. All I'm saying is there are three things, that are like so important to the whole world that I don't happen to find much importance in, whether it's fucking, or whether it's playing golf, because of that, I feel... I can't keep up... (full chorus)". Minor Threat's song "Guilty of Being White" led to some accusations of racism, but MacKaye has strongly denied such intentions and said that some listeners misinterpreted his words. He claims that his experiences attending Wilson High School, whose student population was 70 percent black, inspired the song. There, many students bullied MacKaye and his friends. In an interview, MacKaye stated that he was offended that some perceived racist overtones in the lyrics, saying, "To me, at the time and now, it seemed clear it's an anti-racist song. Of course, it didn't occur to me at the time I wrote it that anybody outside of my twenty or thirty friends who I was singing to would ever have to actually ponder the lyrics or even consider them." Thrash metal band Slayer later covered the song, with the last iteration of the lyric "guilty of being white" changed to "guilty of being right." Hiatus In the time between the release of the band's second seven-inch EP and the Out of Step record, the band briefly split when guitarist Lyle Preslar moved to Illinois to attend college for a semester at Northwestern University. Preslar was a member of Big Black for a few tempestuous rehearsals. During that period, MacKaye and Nelson put together a studio-only project called Skewbald/Grand Union; in a reflection of the slowly increasing disagreements between the two musicians, they were unable to decide on one name. The group recorded three untitled songs, which would be released posthumously as Dischord's 50th release. During Minor Threat's inactive period, Brian Baker also briefly played guitar for Government Issue and appeared on the Make an Effort EP. 
In March 1982, at the urging of Bad Brains' H.R., Preslar left college to reform Minor Threat. The reunited band featured an expanded lineup: Steve Hansgen joined as the band's bassist and Baker switched to second guitar. Some in Minor Threat, particularly drummer Jeff Nelson, took exception to what they saw as MacKaye's imperious attitude on the song "Out of Step." When the song was re-recorded for the LP Out of Step, MacKaye clearly sang "I don't drink/smoke/fuck" (as was the intent of his words all along). The band also inserted an overdubbed spoken section into the instrumental break before the last chorus with MacKaye stating, "This is not a set of rules, I'm not telling you what to do..." Recording engineer Don Zientara had inadvertently recorded an argument between drummer Nelson and lyricist/singer MacKaye that captured the message perfectly, so this was used. According to Mark Andersen and Mark Jenkins' Dance of Days: Two Decades of Punk in the Nation's Capital, this argument was over exactly what would be said in the message that Nelson wanted MacKaye to record, stating essentially what he said without knowing it was being recorded. An ideological door had already been opened, however, and by 1983, some straight-edge punks, such as followers of the band SS Decontrol, were swatting beers out of people's hands at clubs. Breakup Minor Threat broke up in 1983. A contributing factor was disagreement over musical direction. MacKaye was allegedly skipping rehearsal sessions towards the end of the band's career, and he wrote the lyrics to the songs on the Salad Days EP in the studio. That was quite a contrast with the earlier recordings, as he had written and co-written the music for much of the band's early material. 
Minor Threat, which had returned to being a four-piece group with the departure of Hansgen, played its final show on September 23, 1983, at the Lansburgh Cultural Center in Washington, D.C., sharing the bill with go-go band Trouble Funk, and Austin, Texas punk funk act the Big Boys. Fittingly, Minor Threat ended their final set with "Last Song", a tune whose name was also the original title of the band's song "Salad Days". Following the breakup, MacKaye stated that he did not "check out" on hardcore, but in fact hardcore "checked out". Explaining this, he stated that at a 1984 Minutemen show, a fan struck MacKaye's younger brother Alec in the face, and MacKaye punched the fan back, then realized that the violence was "stupid" and that he had a role in that stupidity. MacKaye claimed that immediately after this he decided to leave the hardcore scene.

Subsequent activities

In March 1984, six months after the band broke up, the EPs Minor Threat and In My Eyes were compiled together and re-released as the Minor Threat album. The Complete Discography archival compilation would follow in 1989, with the additional release of First Demo Tape in 2003. Two previously unreleased songs were featured on the 20 Years of Dischord compilation in 2002. MacKaye went on to found Embrace with former members of the Faith, Egg Hunt with Jeff Nelson, and later Fugazi, the Evens, and Coriky, as well as collaborating on Pailhead. Baker went on to play in Junkyard, the Meatmen, Dag Nasty and Government Issue. Since 1994, Baker has been a member of Bad Religion. Preslar was briefly a member of Glenn Danzig's Samhain, and his playing appears on a few songs on the band's first record. He joined The Meatmen in 1984, along with fellow Minor Threat member Brian Baker. He later ran Caroline Records, signing and working with (among others) Peter Gabriel, Ben Folds, Chemical Brothers, and Idaho, and ran marketing for Sire Records.
He graduated from Rutgers University School of Law and lives in New Jersey. Nelson played less-frantic alternative rock with Three and The High-Back Chairs before retiring from live performance. He runs his own label, Adult Swim Records, distributed by Dischord, and is a graphic artist and a political | In March 1984, six months after the band broke up, the EPs Minor Threat and In My Eyes were compiled together and re-released as the Minor Threat album. The Complete Discography archival compilation would follow in 1989, with the additional release of First Demo Tape in 2003. Two previously unreleased songs were featured on the 20 Years of Dischord compilation in 2002. MacKaye went on to found Embrace with former members of the Faith, Egg Hunt with Jeff Nelson, and later Fugazi, the Evens, and Coriky, as well as collaborating on Pailhead. Baker went on to play in Junkyard, the Meatmen, Dag Nasty and Government Issue. Since 1994, Baker has been a member of Bad Religion. Preslar was briefly a member of Glenn Danzig's Samhain, and his playing appears on a few songs on the band's first record. He joined The Meatmen in 1984, along with fellow Minor Threat member Brian Baker. He later ran Caroline Records, signing and working with (among others) Peter Gabriel, Ben Folds, Chemical Brothers, and Idaho, and ran marketing for Sire Records. He graduated from Rutgers University School of Law and lives in New Jersey. Nelson played less-frantic alternative rock with Three and The High-Back Chairs before retiring from live performance. He runs his own label, Adult Swim Records, distributed by Dischord, and is a graphic artist and a political activist in Toledo, Ohio. The band's own Dischord Records released material by many bands from the Washington, D.C., area, such as Government Issue, Void, Scream, Fugazi, Artificial Peace, Rites of Spring, Gray Matter, and Dag Nasty, and has become a respected independent record label. 
Hansgen formed Second Wind with Rich Moore, a former Minor Threat roadie and drummer for the Untouchables. In 1992, he worked as a producer on the first Tool EP Opiate.

Copyright issues

"Major Threat"

In 2005, a mock-up of the cover of Minor Threat's first EP (also used on the Minor Threat LP and Complete Discography CD) was copied by athletic footwear manufacturer Nike for use on a promotional poster for a skateboarding tour called "Major Threat". Nike also altered Minor Threat's logo (designed by Jeff Nelson) for the same campaign, as well as featuring Nike shoes in the new picture, rather than the combat boots worn by Ian MacKaye's younger brother Alec on the original. MacKaye issued a press statement condemning Nike's actions and said that he would discuss legal options with the other members of the band. Meanwhile, fans, at the encouragement of Dischord, organized a letter-writing campaign protesting Nike's infringement. On June 27, 2005, Nike issued a statement apologizing to Minor Threat, Dischord Records, and their fans for the "Major Threat" campaign and said that all promotional artwork (print and digital) that they could acquire had been destroyed.

"Salad Days"

On October 29, 2005, Fox played the first few seconds of Minor Threat's "Salad Days" during an NFL broadcast. Use of the song was not cleared by Dischord Records or any of the members of Minor Threat. Fox claimed that the clip was too short to have violated any copyrights.

Wheelhouse Pickles

In 2007, Brooklyn-based company Wheelhouse Pickles marketed a pepper sauce named "Minor Threat Sauce". Requesting only that the original label design (which was based on the "Bottled Violence" artwork) be amended, Ian MacKaye gave the product his endorsement, commenting: “I don't have an occasion to eat a lot of hot sauce, but I also thought the Minor Threat stuff was nice.”

Urban Outfitters

In 2013, Minor Threat shirts began appearing in Urban Outfitters stores.
Ian MacKaye confirmed that the shirts were officially licensed. Having spent what he described as "a complete waste of time" trying to track down bootlegged Minor Threat merchandise, MacKaye and Dischord made arrangements with a merchandise company in California to manage licensing of the band's shirts, as well as working to ensure that bootleg manufacturers of the shirts were curtailed. In comments that appeared in Rolling Stone, MacKaye called it "absurd" for the shirts to be sold for $28 but concluded that "my time is better spent doing other things" than dealing with shirts. Dischord had previously taken action against Forever 21 in 2009 for marketing unlicensed Minor Threat shirts. Members Ian MacKaye – lead vocals (1980–1983) Lyle Preslar – guitar (1980–1983) Brian Baker – bass (1980–1982, 1983); guitar (1982–1983) Jeff Nelson – drums (1980–1983) Steve Hansgen – bass (1982–1983) Discography Original material Minor Threat (EP, 1981) In My Eyes (EP, 1981) Out of Step (studio album, 1983) Salad Days (EP, |
which is physical. The existence of mental events has been used by philosophers as an argument against physicalism. For example, in his 1974 paper What Is it Like to Be a Bat?, Thomas Nagel argues that physicalist theories of mind cannot explain an organism's subjective experience because they cannot account for its mental events.

Examples

Mary is walking through a park and she sees and recognizes City Hall. This instance of seeing and recognizing City Hall is an instance of perception—something that happens in Mary's mind. That instance of perception is a mental event. It is an event because it is something that happens, and it is mental because it happens in Mary's mind. Mary feels happy after doing well on an exam and she smiles. This feeling of happiness is a mental event. The smile is a physical event. A killer whale recognizes a feeling of hunger. It eats a fish. The recognition of the feeling of hunger is a mental event. Eating the fish is
practical monopolies: gas supply, water supply, roads, canals, and railways. In his Social Economics, Friedrich von Wieser demonstrated his view of the postal service as a natural monopoly: "In the face of [such] single-unit administration, the principle of competition becomes utterly abortive. The parallel network of another postal organization, beside the one already functioning, would be economically absurd; enormous amounts of money for plant and management would have to be expended for no purpose whatever." Overall, most monopolies are man-made monopolies, or unnatural monopolies, not natural ones.

Government-granted monopoly

A government-granted monopoly (also called a "de jure monopoly") is a form of coercive monopoly, in which a government grants exclusive privilege to a private individual or company to be the sole provider of a commodity. Monopoly may be granted explicitly, as when potential competitors are excluded from the market by a specific law, or implicitly, such as when the requirements of an administrative regulation can only be fulfilled by a single market player, or through some other legal or procedural mechanism, such as patents, trademarks, and copyright. These monopolies can also be the result of "rent-seeking" behavior, in which firms expend resources to win the prize of a monopoly and the higher profits it offers compared with a competitive market in their sector.

Monopolist shutdown rule

A monopolist should shut down when price is less than average variable cost for every output level – in other words, where the demand curve is entirely below the average variable cost curve. Under these circumstances, at the profit-maximizing level of output (MR = MC) average revenue would be less than average variable cost, and the monopolist would be better off shutting down in the short term.

Breaking up monopolies

In an unregulated market, monopolies can potentially be ended by new competition, breakaway businesses, or consumers seeking alternatives.
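The shutdown rule can be checked numerically. A toy sketch, using entirely made-up linear inverse demand and U-shaped average variable cost curves chosen so that demand lies below AVC at every output level:

```python
def demand_price(q):
    """Hypothetical inverse demand: the price buyers will pay at output q."""
    return 10 - 0.5 * q

def avg_variable_cost(q):
    """Hypothetical U-shaped average variable cost curve."""
    return 12 - q + 0.1 * q ** 2

def should_shut_down(outputs):
    """Shutdown rule: shut down if price is below average variable cost
    at every output level, i.e. the demand curve lies entirely below
    the AVC curve."""
    return all(demand_price(q) < avg_variable_cost(q) for q in outputs)

outputs = [q / 10 for q in range(1, 201)]  # output levels 0.1 .. 20.0
print(should_shut_down(outputs))  # True: no output level covers variable cost
```

With these curves the gap AVC − P = 2 − 0.5q + 0.1q² is positive everywhere, so the sketch reports that the monopolist should shut down in the short term.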
In a regulated market, a government will often either regulate the monopoly, convert it into a publicly owned monopoly, or forcibly fragment it (see Antitrust law and trust busting). Public utilities, often being naturally efficient with only one operator and therefore less susceptible to efficient breakup, are often strongly regulated or publicly owned. American Telephone & Telegraph (AT&T) and Standard Oil are often cited as examples of the breakup of a private monopoly by government. The Bell System, later AT&T, was protected from competition first by the Kingsbury Commitment, and later by a series of agreements between AT&T and the Federal Government. In 1984, decades after having been granted monopoly power by force of law, AT&T was broken up into various components; competitors such as MCI and Sprint were then able to compete effectively in the long-distance phone market. Such breakups are a response to the deadweight loss and inefficiency of a monopolistic market, with the government intervening on behalf of consumers and society in order to incite competition. While the sentiment among regulators and judges has generally been that breakups are not appropriate as remedies for antitrust enforcement, recent scholarship has found that this hostility to breakups by administrators is largely unwarranted. In fact, some scholars have argued that breakups, even if incorrectly targeted, could still encourage collaboration, innovation, and efficiency.

Law

The law regulating dominance in the European Union is governed by Article 102 of the Treaty on the Functioning of the European Union, which aims at enhancing consumer welfare and the efficiency of allocation of resources by protecting competition on the downstream market. The existence of a very high market share does not always mean consumers are paying excessive prices, since the threat of new entrants to the market can restrain a high-market-share company's price increases.
Competition law does not make merely having a monopoly illegal, but rather abusing the power a monopoly may confer, for instance through exclusionary practices (i.e. pricing high just because it is the only seller around). It is also illegal to attempt to obtain a monopoly through practices such as buying out the competition. If a monopoly occurs naturally, such as through a competitor going out of business or a lack of competition, it is not illegal until the monopoly holder abuses the power.

Establishing dominance

First it is necessary to determine whether a company is dominant, or whether it behaves "to an appreciable extent independently of its competitors, customers and ultimately of its consumer". Establishing dominance is a two-stage test. The first thing to consider is market definition, which is one of the crucial factors of the test. It includes the relevant product market and the relevant geographic market.

Relevant product market

As the definition of the market is a matter of interchangeability, if the goods or services are regarded as interchangeable then they are within the same product market. For example, in United Brands v Commission it was argued that bananas and other fresh fruit were in the same product market, but dominance was found because the special features of the banana meant that it was interchangeable with other fresh fruit only to a limited extent, and was exposed to their competition in a way that was hardly perceptible. The demand substitutability of the goods and services helps define the product market, and can be assessed by the ‘hypothetical monopolist’ (or ‘SSNIP’) test.

Relevant geographic market

It is necessary to define the geographic market because some goods can only be supplied within a narrow area due to technical, practical or legal reasons, and this may help to indicate which undertakings impose a competitive constraint on the other undertakings in question.
Since some goods are too expensive to transport, and it might not be economic to sell them to distant markets in relation to their value, the cost of transport is a crucial factor here. Other factors might be legal controls which restrict an undertaking in a Member State from exporting goods or services to another. Market definition may be difficult to measure but is important: if it is defined too broadly, the undertaking is more likely to be found dominant, and if it is defined too narrowly, it is less likely to be found dominant.

Market shares

As with collusive conduct, market shares are determined with reference to the particular market in which the company and product in question is sold. A market share does not in itself determine whether an undertaking is dominant, but works as an indicator of the state of existing competition within the market. The Herfindahl-Hirschman Index (HHI) is sometimes used to assess how competitive an industry is. It sums the squares of the individual market shares of all of the competitors within the market. The lower the total, the less concentrated the market; the higher the total, the more concentrated the market. In the US, the merger guidelines state that a post-merger HHI below 1000 is viewed as not concentrated, while HHIs above that will provoke further review. By European Union law, very large market shares raise a presumption that a company is dominant, which may be rebuttable. A market share of 100% may be very rare, but it is still possible and has in fact been identified in some cases, for instance the AAMS v Commission case. Undertakings possessing market shares lower than 100% but over 90% have also been found dominant, for example in the Microsoft v Commission case. In the AKZO v Commission case, an undertaking is presumed to be dominant if it has a market share of 50% or more.
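The HHI calculation is simple to express in code. A minimal sketch (the function name `hhi` is my own; the 1000 figure is the US merger-guidelines threshold quoted above):

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: the sum of the squared market shares
    (expressed in percent) of every competitor in the market."""
    return sum(s ** 2 for s in shares_percent)

# Five equal competitors at 20% each.
print(hhi([20, 20, 20, 20, 20]))   # 2000
# Ten firms at 10% each sits exactly on the 1000 threshold below which
# the US merger guidelines treat a market as unconcentrated.
print(hhi([10] * 10))              # 1000
# A near-monopoly drives the index towards its maximum of 10000.
print(hhi([90, 5, 5]))             # 8150
```

Note how squaring the shares makes the index far more sensitive to one large firm than to many small ones.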
There are also findings of dominance below a market share of 50%: in United Brands v Commission, for instance, the undertaking possessed a market share of only 40% to 45% and was still found dominant in combination with other factors. The lowest market share yet of a company considered "dominant" in the EU was 39.7%. If a company has a dominant position, it bears a special responsibility not to allow its conduct to impair competition on the common market; however, this responsibility falls away if it is not dominant. When considering whether an undertaking is dominant, a combination of factors is involved. None of them can be taken separately: on its own, each is less determinative than when they are combined. Also, in cases where an undertaking has previously been found dominant, it is still necessary to redefine the market and make a whole new analysis of the conditions of competition based on the available evidence at the appropriate time.

Other related factors

According to the Guidance, there are three more issues that must be examined: actual competitors, which relates to the market position of the dominant undertaking and its competitors; potential competitors, which concerns expansion and entry; and lastly countervailing buyer power.

Actual competitors

Market share may be a valuable source of information regarding the market structure and the market position when it comes to assessing it. The dynamics of the market and the extent to which the goods and services are differentiated are relevant in this area.

Potential competitors

This concerns the competition that would come from other undertakings which are not yet operating in the market but will enter it in the future. Market shares may therefore not be useful in assessing the competitive pressure that is exerted on an undertaking in this area.
The potential entry of new firms and expansion by an undertaking must be taken into account, so barriers to entry and barriers to expansion are important factors here.

Countervailing buyer power

Competitive constraints may not always come from actual or potential competitors. Sometimes they may also come from powerful customers who have sufficient bargaining strength, which comes from their size or their commercial significance for a dominant firm.

Types of abuses

There are three main types of abuse: exploitative abuse, exclusionary abuse and single market abuse.

Exploitative abuse

This arises when a monopolist has such significant market power that it can restrict its output while increasing the price above the competitive level without losing customers. The Commission is less concerned with this type than with the others.

Exclusionary abuse

This is the type the Commission is most concerned about, because it is capable of causing long-term consumer damage and is more likely to prevent the development of competition. An example is exclusive dealing agreements.

Single market abuse

This arises when a dominant undertaking carries out excessive pricing, which not only has an exploitative effect but also prevents parallel imports and limits intra-brand competition.

Examples of abuses

Limiting supply
Predatory pricing or undercutting
Price discrimination
Refusal to deal and exclusive dealing
Tying and product bundling

Despite wide agreement that the above constitute abusive practices, there is some debate about whether there needs to be a causal connection between the dominant position of a company and its actual abusive conduct. Furthermore, there has been some consideration of what happens when a company merely attempts to abuse its dominant position.
To provide a more specific example, the economic and philosophical scholar Adam Smith notes that trade with the East Indies was, for the most part, carried on by an exclusive company such as that of the English or the Dutch. Monopolies such as these are generally established against the very nation in which they arise. Smith goes on to state that there are two types of monopoly. The first type is one which tends always to attract to the particular trade where the monopoly was conceived a greater proportion of the stock of the society than would go to that trade otherwise. The second type tends occasionally to attract stock towards the particular trade where it was conceived, and sometimes to repel it from that trade, depending on varying circumstances: rich countries tended to repel such stock, while poorer countries attracted it. For example, the Dutch company would dispose of any excess goods not taken to market in order to preserve their monopoly, while the English sold more goods for better prices. Both of these tendencies were extremely destructive, as can be seen in Smith's writings.

Historical monopolies

Origin

The term "monopoly" first appears in Aristotle's Politics. Aristotle describes Thales of Miletus's cornering of the market in olive presses as a monopoly (μονοπώλιον). Another early reference to the concept of "monopoly" in a commercial sense appears in tractate Demai of the Mishna (2nd century C.E.), regarding the purchasing of agricultural goods from a dealer who has a monopoly on the produce (chapter 5; 4). The meaning and understanding of the English word 'monopoly' has changed over the years.

Monopolies of resources

Salt

Vending of common salt (sodium chloride) was historically a natural monopoly. Until recently, a combination of strong sunshine and low humidity or an extension of peat marshes was necessary for producing salt from the sea, the most plentiful source.
Changing sea levels periodically caused salt "famines" and communities were forced to depend upon those who controlled the scarce inland mines and salt springs, which were often in hostile areas (e.g. the Sahara desert) requiring well-organised security for transport, storage, and distribution. The Salt Commission was a legal monopoly in China. Formed in 758, the Commission controlled salt production and sales in order to raise tax revenue for the Tang Dynasty. The "Gabelle" was a notoriously high tax levied upon salt in the Kingdom of France. The much-hated levy had a role in the beginning of the French Revolution, when strict legal controls specified who was allowed to sell and distribute salt. First instituted in 1286, the Gabelle was not permanently abolished until 1945.

Coal

Robin Gollan argues in The Coalminers of New South Wales that anti-competitive practices developed in the coal industry of Australia's Newcastle as a result of the business cycle. The monopoly was generated by formal meetings of the local management of coal companies agreeing to fix a minimum price for sale at dock. This collusion was known as "The Vend". The Vend ended and was reformed repeatedly during the late 19th century, each collapse brought on by a recession in the business cycle. "The Vend" was able to maintain its monopoly due to trade union assistance, and material advantages (primarily coal geography). During the early 20th century, as a result of comparable monopolistic practices in the Australian coastal shipping business, the Vend developed as an informal and illegal collusion between the steamship owners and the coal industry, eventually resulting in the High Court case Adelaide Steamship Co. Ltd v.

There are three forms of price discrimination. First degree price discrimination charges each consumer the maximum price the consumer is willing to pay. Second degree price discrimination involves quantity discounts.
Third degree price discrimination involves grouping consumers according to willingness to pay, as measured by their price elasticities of demand, and charging each group a different price. Third degree price discrimination is the most prevalent type. Three conditions must be present for a company to engage in successful price discrimination. First, the company must have market power. Second, the company must be able to sort customers according to their willingness to pay for the good. Third, the firm must be able to prevent resale. A company must have some degree of market power to practice price discrimination. Without market power a company cannot charge more than the market price. Any market structure characterized by a downward sloping demand curve has market power – monopoly, monopolistic competition and oligopoly. The only market structure that has no market power is perfect competition. A company wishing to practice price discrimination must be able to prevent middlemen or brokers from acquiring the consumer surplus for themselves. The company accomplishes this by preventing or limiting resale. Many methods are used to prevent resale. For instance, persons are required to show photographic identification and a boarding pass before boarding an airplane. Most travelers assume that this practice is strictly a matter of security. However, a primary purpose in requesting photographic identification is to confirm that the ticket purchaser is the person about to board the airplane and not someone who has repurchased the ticket from a discount buyer. The inability to prevent resale is the largest obstacle to successful price discrimination. Companies have, however, developed numerous methods to prevent resale. For example, universities require that students show identification before entering sporting events. Governments may make it illegal to resell tickets or products. In Boston, Red Sox baseball tickets can only be resold legally to the team.
The three basic forms of price discrimination are first, second and third degree price discrimination. In first degree price discrimination the company charges the maximum price each customer is willing to pay. The maximum price a consumer is willing to pay for a unit of the good is the reservation price. Thus for each unit the seller tries to set the price equal to the consumer's reservation price. Direct information about a consumer's willingness to pay is rarely available. Sellers tend to rely on secondary information such as where a person lives (postal codes); for example, catalog retailers can mail high-priced catalogs to high-income postal codes. First degree price discrimination most frequently occurs in regard to professional services or in transactions involving direct buyer-seller negotiations. For example, an accountant who has prepared a consumer's tax return has information that can be used to charge customers based on an estimate of their ability to pay. In second degree price discrimination or quantity discrimination customers are charged different prices based on how much they buy. There is a single price schedule for all consumers but the prices vary depending on the quantity of the good bought. The theory behind second degree price discrimination is that a consumer is willing to buy only a certain quantity of a good at a given price. Companies know that a consumer's willingness to buy decreases as more units are purchased. The task for the seller is to identify these price points and to reduce the price once one is reached, in the hope that a reduced price will trigger additional purchases from the consumer – for example, by selling in unit blocks rather than individual units. In third degree price discrimination or multi-market price discrimination the seller divides the consumers into different groups according to their willingness to pay, as measured by their price elasticity of demand.
Each group of consumers effectively becomes a separate market with its own demand curve and marginal revenue curve. The firm then attempts to maximize profits in each segment by equating MR and MC. Generally the company charges a higher price to the group with a more price-inelastic demand and a relatively lower price to the group with a more elastic demand. Examples of third degree price discrimination abound. Airlines charge higher prices to business travelers than to vacation travelers. The reasoning is that the demand curve for a vacation traveler is relatively elastic while the demand curve for a business traveler is relatively inelastic. Any determinant of price elasticity of demand can be used to segment markets. For example, seniors have a more elastic demand for movies than do young adults because they generally have more free time. Thus theaters will offer discount tickets to seniors. Example Assume that under a uniform pricing system the monopolist would sell five units at a price of $10 per unit, and that his marginal cost is $5 per unit. Total revenue would be $50, total costs would be $25 and profit would be $25. If the monopolist practiced perfect price discrimination he would sell the first unit for $50, the second unit for $40 and so on. Total revenue would be $150, his total cost would be $25 and his profit would be $125. Several things are worth noting. First, the monopolist acquires all the consumer surplus and eliminates practically all the deadweight loss, because he is willing to sell to anyone who is willing to pay at least the marginal cost; thus the price discrimination promotes efficiency. Second, under this pricing scheme price equals average revenue and equals marginal revenue; that is, the monopolist behaves like a perfectly competitive company. Third, the discriminating monopolist produces a larger quantity than the monopolist operating under a uniform pricing scheme.
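The arithmetic of this example can be checked with a short script; the figures ($10 uniform price, $5 marginal cost, reservation prices of $50, $40, $30, $20 and $10) are those of the example above:

```python
# Uniform pricing vs. perfect (first degree) price discrimination,
# using the figures from the example above.
marginal_cost = 5
reservation_prices = [50, 40, 30, 20, 10]  # one per unit, highest first

# Uniform pricing: five units sold at $10 each
uniform_revenue = 10 * 5
uniform_profit = uniform_revenue - marginal_cost * 5   # 50 - 25 = 25

# Perfect discrimination: each unit sold at its reservation price
disc_revenue = sum(reservation_prices)                 # 150
disc_profit = disc_revenue - marginal_cost * len(reservation_prices)

print(uniform_profit, disc_profit)  # 25 125
```

The discriminating monopolist's profit ($125) exceeds the uniform-pricing profit ($25) by exactly the consumer surplus and deadweight loss captured.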
Classifying customers Successful price discrimination requires that companies separate consumers according to their willingness to buy. Determining a customer's willingness to buy a good is difficult. Asking consumers directly is fruitless: consumers don't know, and to the extent they do, they are reluctant to share that information with marketers. The two main methods for determining willingness to buy are observation of personal characteristics and of consumer actions. As noted, information about where a person lives (postal codes), how the person dresses, what kind of car he or she drives, occupation, and income and spending patterns can be helpful in classifying consumers. Monopoly and efficiency According to the standard model, in which a monopolist sets a single price for all consumers, the monopolist will sell a lesser quantity of goods at a higher price than would companies under perfect competition. Because the monopolist ultimately forgoes transactions with consumers who value the product or service more than its price, monopoly pricing creates a deadweight loss, referring to potential gains that went neither to the monopolist nor to consumers. Deadweight loss is the cost to society of the market being out of equilibrium and hence inefficient. Given the presence of this deadweight loss, the combined surplus (or wealth) for the monopolist and consumers is necessarily less than the total surplus obtained by consumers under perfect competition. Where efficiency is defined by the total gains from trade, the monopoly setting is less efficient than perfect competition. It is often argued that monopolies tend to become less efficient and less innovative over time, becoming "complacent", because they do not have to be efficient or innovative to compete in the marketplace. Sometimes this very loss of psychological efficiency can increase a potential competitor's value enough to overcome market entry barriers, or provide incentive for research and investment into new alternatives.
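The deadweight loss of single-price monopoly can be made concrete with a small numeric sketch. The linear demand curve and cost figures below are hypothetical, chosen only to illustrate the calculation:

```python
# Deadweight loss of a single-price monopoly with linear inverse demand
# P = a - b*q and constant marginal cost c (all numbers hypothetical).
a, b, c = 20.0, 1.0, 4.0

q_comp = (a - c) / b          # perfect competition: price = marginal cost
q_mono = (a - c) / (2 * b)    # monopoly: MR = a - 2*b*q set equal to MC
p_mono = a - b * q_mono       # monopoly price read off the demand curve

# Triangle between demand and marginal cost over the withheld units
deadweight_loss = 0.5 * (p_mono - c) * (q_comp - q_mono)
print(q_comp, q_mono, p_mono, deadweight_loss)  # 16.0 8.0 12.0 32.0
```

The monopolist halves output relative to the competitive benchmark, and the $32 triangle is surplus that goes to neither party.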
The theory of contestable markets argues that in some circumstances (private) monopolies are forced to behave as if there were competition because of the risk of losing their monopoly to new entrants. This is likely to happen when a market's barriers to entry are low. It might also be because of the availability in the longer term of substitutes in other markets. For example, a canal monopoly, while worth a great deal in the late 18th century United Kingdom, was worth much less during the late 19th century because of the introduction of railways as a substitute. Contrary to common misconception, monopolists do not try to sell items for the highest possible price, nor do they try to maximize profit per unit; rather, they try to maximize total profit. Natural monopoly A natural monopoly is an organization that experiences increasing returns to scale over the relevant range of output and relatively high fixed costs. A natural monopoly occurs where the average cost of production "declines throughout the relevant range of product demand". The relevant range of product demand is where the average cost curve is below the demand curve. When this situation occurs, it is always more efficient for one large company to supply the market than multiple smaller companies; in fact, absent government intervention, such markets will naturally evolve into a monopoly. Often, a natural monopoly is the outcome of an initial rivalry between several competitors. An early market entrant that takes advantage of the cost structure and can expand rapidly can exclude smaller companies from entering and can drive or buy out other companies. A natural monopoly suffers from the same inefficiencies as any other monopoly. Left to its own devices, a profit-seeking natural monopoly will produce where marginal revenue equals marginal cost. Regulation of natural monopolies is problematic. Fragmenting such monopolies is by definition inefficient.
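A minimal sketch of the natural-monopoly cost structure, assuming a hypothetical cost function C(q) = F + c·q with a high fixed cost F: average cost declines over the whole output range, so one firm serving the market incurs a lower total cost than two firms splitting it:

```python
# Illustrative natural-monopoly cost structure (hypothetical numbers):
# high fixed cost F, constant marginal cost c, so AC(q) = F/q + c
# declines throughout the relevant range of output.
F, c = 1000.0, 2.0

def average_cost(q):
    return F / q + c

market_demand = 500  # units demanded over the relevant range

# Total cost if one firm serves the whole market: F + c*q
one_firm = average_cost(market_demand) * market_demand
# Total cost if two firms each serve half: the fixed cost is duplicated
two_firms = 2 * average_cost(market_demand / 2) * (market_demand / 2)

print(one_firm, two_firms)  # 2000.0 3000.0
```

Duplicating the fixed cost is what makes fragmentation of a natural monopoly "by definition inefficient".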
The most frequently used methods of dealing with natural monopolies are government regulation and public ownership. Government regulation generally consists of regulatory commissions charged with the principal duty of setting prices. Natural monopolies are synonymous with what Friedrich von Wieser, in his 1914 book Social Economics, called the "single-unit enterprise". As mentioned, government regulation is frequently applied to natural monopolies to help control prices. An example can be found in the United States Postal Service, which has a monopoly over certain types of mail. According to Wieser, a competitive market within the postal industry would lead to extreme prices and unnecessary spending, which highlights why government regulation in the form of price control is necessary: it helps keep the market efficient. To reduce prices and increase output, regulators often use average cost pricing. Under average cost pricing, the price and quantity are determined by the intersection of the average cost curve and the demand curve. This pricing scheme eliminates any positive economic profit, since price equals average cost. Average-cost pricing is not perfect: regulators must estimate average costs, and companies have a reduced incentive to lower costs. Regulation of this type has not been limited to natural monopolies. Average-cost pricing also has a further disadvantage: by setting price equal to the intersection of the demand curve and the average total cost curve, the firm's output is allocatively inefficient, as the price exceeds the marginal cost (whereas allocative efficiency requires price equal to marginal cost, as in a perfectly competitive market). In 1848, J.S. Mill was the first individual to describe monopolies with the adjective "natural". He used it interchangeably with "practical".
At the time, Mill gave the following examples of natural or practical monopolies: gas supply, water supply, roads, canals, and railways. In his Social Economics, Friedrich von Wieser demonstrated his view of the postal service as a natural monopoly: "In the face of [such] single-unit administration, the principle of competition becomes utterly abortive. The parallel network of another postal organization, beside the one already functioning, would be economically absurd; enormous amounts of money for plant and management would have to be expended for no purpose whatever." Overall, most monopolies are man-made, or unnatural, monopolies rather than natural ones. Government-granted monopoly A government-granted monopoly (also called a "de jure monopoly") is a form of coercive monopoly in which a government grants exclusive privilege to a private individual or company to be the sole provider of a commodity. A monopoly may be granted explicitly, as when potential competitors are excluded from the market by a specific law, or implicitly, such as when the requirements of an administrative regulation can only be fulfilled by a single market player, or through some other legal or procedural mechanism, such as patents, trademarks, and copyright. These monopolies can also be the result of "rent-seeking" behavior, in which firms expend resources to win the prize of a monopoly and the increased profits it yields relative to a competitive market in their sector. Monopolist shutdown rule A monopolist should shut down when price is less than average variable cost for every output level – in other words, where the demand curve is entirely below the average variable cost curve. Under these circumstances, at the profit-maximizing level of output (MR = MC) average revenue would be less than average variable cost, and the monopolist would be better off shutting down in the short term.
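The shutdown rule can be illustrated with a small sketch, assuming a linear inverse demand curve and a constant average variable cost (so that AVC coincides with marginal cost); all numbers are hypothetical:

```python
# Monopolist shutdown rule sketch: with linear inverse demand P = a - b*q
# and a constant average variable cost (so AVC = MC), shut down in the
# short run if price at the MR = MC output is below AVC.
def should_shut_down(a, b, avc):
    q_star = (a - avc) / (2 * b)   # MR = a - 2*b*q set equal to MC
    if q_star <= 0:                # demand lies entirely below the AVC curve
        return True
    price = a - b * q_star         # average revenue at the profit maximum
    return price < avc

print(should_shut_down(a=10, b=0.5, avc=12))  # True: no price covers AVC
print(should_shut_down(a=10, b=0.5, avc=4))   # False: continue operating
```

In the first call the demand curve tops out at $10 while variable cost is $12 per unit, so every output level loses money on variable cost alone and the firm is better off producing nothing.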
Breaking up monopolies In an unregulated market, monopolies can potentially be ended by new competition, breakaway businesses, or consumers seeking alternatives. In a regulated market, a government will often either regulate the monopoly, convert it into a publicly owned monopoly, or forcibly fragment it (see Antitrust law and trust busting). Public utilities, often being naturally efficient with only one operator and therefore less susceptible to efficient breakup, are often strongly regulated or publicly owned. American Telephone & Telegraph (AT&T) and Standard Oil are often cited as examples of the breakup of a private monopoly by government. The Bell System, later AT&T, was protected from competition first by the Kingsbury Commitment, and later by a series of agreements between AT&T and the Federal Government. In 1984, decades after having been granted monopoly power by force of law, AT&T was broken up into various components; competitors such as MCI and Sprint were then able to compete effectively in the long-distance phone market. Such breakups are prompted by the presence of deadweight loss and inefficiency in a monopolistic market, causing the government to intervene on behalf of consumers and society in order to incite competition. While the sentiment among regulators and judges has generally been against breakups as remedies for antitrust enforcement, recent scholarship has found that this hostility to breakups by administrators is largely unwarranted. In fact, some scholars have argued that breakups, even if incorrectly targeted, could arguably still encourage collaboration, innovation, and efficiency. Law The law regulating dominance in the European Union is governed by Article 102 of the Treaty on the Functioning of the European Union, which aims at enhancing consumer welfare and the efficiency of allocation of resources by protecting competition on the downstream market.
The existence of a very high market share does not always mean consumers are paying excessive prices, since the threat of new entrants to the market can restrain a high-market-share company's price increases. Competition law does not make merely having a monopoly illegal, but rather abusing the power a monopoly may confer, for instance through exclusionary practices (i.e. pricing high just because the firm is the only supplier). It may also be noted that it is illegal to try to obtain a monopoly by buying out the competition or similar practices. If a monopoly occurs naturally, such as through a competitor going out of business or a lack of competition, it is not illegal until the monopoly holder abuses the power. Establishing dominance First it is necessary to determine whether a company is dominant, that is, whether it behaves "to an appreciable extent independently of its competitors, customers and ultimately of its consumers". Establishing dominance is a two-stage test. The first thing to consider is market definition, which is one of the crucial factors of the test. It includes the relevant product market and the relevant geographic market. Relevant product market As the definition of the market turns on interchangeability, if the goods or services are regarded as interchangeable then they are within the same product market. For example, in the case of United Brands v Commission, it was argued that bananas and other fresh fruit were in the same product market, but dominance was found because the special features of the banana meant it was interchangeable with other fresh fruit only to a limited extent and was exposed to their competition in a way that is hardly perceptible. The demand substitutability of the goods and services helps define the product market, and it can be assessed by the 'hypothetical monopolist' test, also known as the 'SSNIP' test.
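The SSNIP test can be sketched numerically. The function below, with hypothetical figures and a linear approximation of the demand response, asks whether a hypothetical monopolist of the candidate market could profitably impose a small (5%) but significant non-transitory price increase:

```python
# SSNIP ("hypothetical monopolist") sketch. If the price increase is
# profitable, the candidate market is a relevant market in its own right;
# if buyers switch away, the candidate market must be widened.
# All figures are illustrative.
def ssnip_profitable(price, quantity, marginal_cost, elasticity, rise=0.05):
    new_price = price * (1 + rise)
    new_quantity = quantity * (1 - elasticity * rise)  # linear approximation
    old_profit = (price - marginal_cost) * quantity
    new_profit = (new_price - marginal_cost) * new_quantity
    return new_profit > old_profit

# Inelastic demand: the rise is profitable, so the market stands alone.
print(ssnip_profitable(price=10, quantity=100, marginal_cost=6, elasticity=0.5))
# Highly elastic demand: customers defect, so substitutes belong in the market.
print(ssnip_profitable(price=10, quantity=100, marginal_cost=6, elasticity=5.0))
```

The elasticity parameter is doing the work here: it stands in for how readily buyers substitute toward goods outside the candidate market.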
Relevant geographic market It is necessary to define the geographic market because some goods can only be supplied within a narrow area due to technical, practical or legal reasons, and this may help to indicate which undertakings impose a competitive constraint on the other undertakings in question. Since some goods are too expensive to transport relative to their value, such that it might not be economic to sell them to distant markets, the cost of transport is a crucial factor here. Other factors might be legal controls that restrict an undertaking in a Member State from exporting goods or services to another. Market definition may be difficult to measure but is important: if the market is defined too narrowly, the undertaking is more likely to be found dominant, and if it is defined too broadly, dominance is less likely to be found. Market shares As with collusive conduct, market shares are determined with reference to the particular market in which the company and product in question are sold. Market share does not in itself determine whether an undertaking is dominant, but works as an indicator of the state of existing competition within the market. The Herfindahl-Hirschman Index (HHI) is sometimes used to assess how competitive an industry is. It sums the squares of the individual market shares of all of the competitors within the market. The lower the total, the less concentrated the market; the higher the total, the more concentrated the market. In the US, the merger guidelines state that a post-merger HHI below 1000 is viewed as not concentrated, while HHIs above that will provoke further review. Under European Union law, very large market shares raise a presumption that a company is dominant, which may be rebutted. A market share of 100% may be very rare, but it is still possible and has in fact been identified in some cases, for instance the AAMS v Commission case.
Undertakings possessing a market share lower than 100% but over 90% have also been found dominant, for example in the Microsoft v Commission case. In the AKZO v Commission case, the Court held that, save in exceptional circumstances, a market share of 50% raises a presumption that the undertaking is dominant.
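The HHI calculation described above is simple to reproduce; the market-share splits below are hypothetical:

```python
# Herfindahl-Hirschman Index: the sum of squared market shares,
# with shares expressed in percent (so the index runs from near 0
# for a fragmented market up to 10000 for a pure monopoly).
def hhi(shares_percent):
    return sum(s ** 2 for s in shares_percent)

print(hhi([25, 25, 25, 25]))  # 2500: four equal firms
print(hhi([10] * 10))         # 1000: ten equal firms, the US guideline line
print(hhi([90, 5, 5]))        # 8150: a near-monopolist dominates the index
```

Squaring the shares is what makes the index sensitive to concentration: one firm at 90% contributes 8100 points on its own, far more than nine firms at 10% each.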
profoundly. A 1949 report noted the lack of "any great slackening in the pace of life at the Institute" to match the return to peacetime, remembering the "academic tranquility of the prewar years", though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities. The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of MIT between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, MIT no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons, philanthropic foundations and the federal government. In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and MIT's defense research. In this period MIT's various departments were researching helicopters, smart bombs and counterinsurgency techniques for the war in Vietnam as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969 during a meeting of faculty members and students seeking to shift the emphasis on military research toward environmental and social problems. MIT ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to "greater strength and unity" after these times of turmoil.
However, six MIT students were sentenced to prison terms at this time and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT's role in military research and its suppression of these protests. (Richard Leacock's film, November Actions, records some of these tumultuous events.) In the 1980s, there was more controversy at MIT over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, MIT's research for the military has included work on robots, drones and 'battle suits'. Recent history MIT has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman's GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the OpenCourseWare project has made course materials for over 2,000 MIT classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005. MIT was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs.
Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new "backlot" buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School's eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption. In 2001, inspired by the open source and open access movements, MIT launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, MIT announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its "MITx" program, for a modest fee. The "edX" online platform supporting MITx was initially developed in partnership with Harvard and its analogous "Harvardx" initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the MIT faculty adopted an open-access policy to make its scholarship publicly accessible online. MIT has its own police force. 
Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier's memorial service was attended by more than 10,000 people, in a ceremony hosted by the MIT community with thousands of police officers from the New England region and Canada. On November 25, 2013, MIT announced the creation of the Collier Medal, to be awarded annually to "an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of the MIT community and in all aspects of his life". The announcement further stated that "Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness". In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion. The Laser Interferometer Gravitational-Wave Observatory (LIGO) was designed and constructed by a team of scientists from California Institute of Technology, MIT, and industrial contractors, and funded by the National Science Foundation. 
It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and MIT physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also an MIT graduate, designed the laser interferometric technique, which served as the essential blueprint for LIGO. In 2021, MIT researchers in computer science and artificial intelligence developed an AI system that makes robots better at handling objects. The simulated anthropomorphic hand they created could manipulate more than 2,000 objects, and the system did not need to know what it was about to pick up in order to find a way to move it around in its hand. Campus MIT's campus in the city of Cambridge spans approximately a mile along the north side of the Charles River basin. The campus is divided roughly in half by Massachusetts Avenue, with most dormitories and student life facilities to the west and most academic buildings to the east. The bridge closest to MIT is the Harvard Bridge, which is known for being marked off in a non-standard unit of length – the smoot. The Kendall/MIT MBTA Red Line station is located on the northeastern edge of the campus, in Kendall Square. The Cambridge neighborhoods surrounding MIT are a mixture of high tech companies occupying both modern office and rehabilitated industrial buildings, as well as socio-economically diverse residential neighborhoods. In early 2016, MIT presented its updated Kendall Square Initiative to the City of Cambridge, with plans for mixed-use educational, retail, residential, startup incubator, and office space in a dense high-rise transit-oriented development plan.
The MIT Museum will eventually be moved immediately adjacent to a Kendall Square subway entrance, joining the List Visual Arts Center on the eastern end of the campus. Each building at MIT has a number (possibly preceded by a W, N, E, or NW) designation, and most have a name as well. Typically, academic and office buildings are referred to primarily by number while residence halls are referred to by name. The organization of building numbers roughly corresponds to the order in which the buildings were built and their location relative (north, west, and east) to the original center cluster of Maclaurin buildings. Many of the buildings are connected above ground as well as through an extensive network of tunnels, providing protection from the Cambridge weather as well as a venue for roof and tunnel hacking. MIT's on-campus nuclear reactor is one of the most powerful university-based nuclear reactors in the United States. The prominence of the reactor's containment building in a densely populated area has been controversial, but MIT maintains that it is well-secured. In 1999 Bill Gates donated US$20 million to MIT for the construction of a computer laboratory named the "William H. Gates Building", and designed by architect Frank Gehry. While Microsoft had previously given financial support to the institution, this was the first personal donation received from Gates. MIT Nano, also known as Building 12, is an interdisciplinary facility for nanoscale research. Its cleanroom and research space, visible through expansive glass facades, is the largest research facility of its kind in the nation. With a cost of US$400 million, it is also one of the costliest buildings on campus. The facility also provides state-of-the-art nanoimaging capabilities with vibration damped imaging and metrology suites sitting atop a slab of concrete underground. 
Other notable campus facilities include a pressurized wind tunnel for testing aerodynamic research, a towing tank for testing ship and ocean structure designs, and previously Alcator C-Mod, which was the largest fusion device operated by any university. MIT's campus-wide wireless network was completed in the fall of 2005 and consists of nearly 3,000 access points covering the campus. In 2001, the Environmental Protection Agency sued MIT for violating the Clean Water Act and the Clean Air Act with regard to its hazardous waste storage and disposal procedures. MIT settled the suit by paying a $155,000 fine and launching three environmental projects. In connection with capital campaigns to expand the campus, the Institute has also extensively renovated existing buildings to improve their energy efficiency. MIT has also taken steps to reduce its environmental impact by running alternative fuel campus shuttles, subsidizing public transportation passes, and building a low-emission cogeneration plant that serves most of the campus electricity, heating, and cooling requirements. MIT has substantial commercial real estate holdings in Cambridge on which it pays property taxes, plus an additional voluntary payment in lieu of taxes (PILOT) on academic buildings which are legally tax-exempt. It is the largest taxpayer in the city, contributing approximately 14% of the city's annual revenues. Holdings include Technology Square, parts of Kendall Square, and many properties in Cambridgeport and Area 4 neighboring the educational buildings. The land is held for investment purposes and potential long-term expansion. Architecture MIT's School of Architecture, now the School of Architecture and Planning, was the first formal architecture program in the United States, and it has a history of commissioning progressive buildings.
The first buildings constructed on the Cambridge campus, completed in 1916, are sometimes called the "Maclaurin buildings" after Institute president Richard Maclaurin who oversaw their construction. Designed by William Welles Bosworth, these imposing buildings were built of reinforced concrete, a first for a non-industrial – much less university – building in the US. Bosworth's design was influenced by the City Beautiful Movement of the early 1900s and features the Pantheon-esque Great Dome housing the Barker Engineering Library. The Great Dome overlooks Killian Court, where graduation ceremonies are held each year. The friezes of the limestone-clad buildings around Killian Court are engraved with the names of important scientists and philosophers. The spacious Building 7 atrium at 77 Massachusetts Avenue is regarded as the entrance to the Infinite Corridor and the rest of the campus. Alvar Aalto's Baker House (1947), Eero Saarinen's MIT Chapel and Kresge Auditorium (1955), and I.M. Pei's Green, Dreyfus, Landau, and Wiesner buildings represent high forms of post-war modernist architecture. More recent buildings like Frank Gehry's Stata Center (2004), Steven Holl's Simmons Hall (2002), Charles Correa's Building 46 (2005), and Fumihiko Maki's Media Lab Extension (2009) stand out among the Boston area's classical architecture and serve as examples of contemporary campus "starchitecture". These buildings have not always been well received; in 2010, The Princeton Review included MIT in a list of twenty schools whose campuses are "tiny, unsightly, or both". Housing Undergraduates are guaranteed four-year housing in one of MIT's 11 undergraduate dormitories. Out of the 11 dormitories, 10 are currently active due to one of the residential halls, Burton Conner, undergoing renovation from 2020 to 2022. Those living on campus can receive support and mentoring from live-in graduate student tutors, resident advisors, and faculty housemasters. 
Because housing assignments are made based on the preferences of the students themselves, diverse social atmospheres can be sustained in different living groups; for example, according to the Yale Daily News staff's The Insider's Guide to the Colleges, 2010, "The split between East Campus and West Campus is a significant characteristic of MIT. East Campus has gained a reputation as a thriving counterculture." MIT also has 5 dormitories for single graduate students and 2 apartment buildings on campus for married student families. MIT has an active Greek and co-op housing system, including thirty-six fraternities, sororities, and independent living groups (FSILGs). , 98% of all undergraduates lived in MIT-affiliated housing; 54% of the men participated in fraternities and 20% of the women were involved in sororities. Most FSILGs are located across the river in Back Bay near where MIT was founded, and there is also a cluster of fraternities on MIT's West Campus that face the Charles River Basin. After the 1997 alcohol-related death of Scott Krueger, a new pledge at the Phi Gamma Delta fraternity, MIT required all freshmen to live in the dormitory system starting in 2002. Because FSILGs had previously housed as many as 300 freshmen off-campus, the new policy could not be implemented until Simmons Hall opened in that year. In 2013–2014, MIT abruptly closed and then demolished undergrad dorm Bexley Hall, citing extensive water damage that made repairs infeasible. In 2017, MIT shut down Senior House after a century of service as an undergrad dorm. That year, MIT administrators released data showing just 60% of Senior House residents had graduated in four years. Campus-wide, the four-year graduation rate is 84% (the cumulative graduation rate is significantly higher). Organization and administration MIT is chartered as a non-profit organization and is owned and governed by a privately appointed board of trustees known as the MIT Corporation. 
The current board consists of 43 members elected to five-year terms, 25 life members who vote until their 75th birthday, 3 elected officers (President, Treasurer, and Secretary), and 4 ex officio members (the president of the alumni association, the Governor of Massachusetts, the Massachusetts Secretary of Education, and the Chief Justice of the Massachusetts Supreme Judicial Court). The board is chaired by Diane Greene SM ’78, co-founder and former CEO of VMware and former CEO of Google Cloud. The Corporation approves the budget, new programs, degrees and faculty appointments, and elects the President to serve as the chief executive officer of the university and preside over the Institute's faculty. MIT's endowment and other financial assets are managed through a subsidiary called MIT Investment Management Company (MITIMCo). Valued at $16.4 billion in 2018, MIT's endowment was then the sixth-largest among American colleges and universities. MIT has five schools (Science, Engineering, Architecture and Planning, Management, and Humanities, Arts, and Social Sciences) and one college (Schwarzman College of Computing), but no schools of law or medicine. While faculty committees assert substantial control over many areas of MIT's curriculum, research, student life, and administrative affairs, the chair of each of MIT's 32 academic departments reports to the dean of that department's school, who in turn reports to the Provost under the President. The current president is L. Rafael Reif, who formerly served as provost under President Susan Hockfield, the first woman to hold the post. Academics MIT is a large, highly residential, research university with a majority of enrollments in graduate and professional programs. The university has been accredited by the New England Association of Schools and Colleges since 1929. 
MIT operates on a 4–1–4 academic calendar with the fall semester beginning after Labor Day and ending in mid-December, a 4-week "Independent Activities Period" in the month of January, and the spring semester commencing in early February and ceasing in late May. MIT students refer to both their majors and classes using numbers or acronyms alone. Departments and their corresponding majors are numbered in the approximate order of their foundation; for example, Civil and Environmental Engineering is , while Linguistics and Philosophy is . Students majoring in Electrical Engineering and Computer Science (EECS), the most popular department, collectively identify themselves as "Course 6". MIT students use a combination of the department's course number and the number assigned to the class to identify their subjects; for instance, the introductory calculus-based classical mechanics course is simply "8.01" at MIT. Undergraduate program The four-year, full-time undergraduate program maintains a balance between professional majors and those in the arts and sciences, and has been dubbed "most selective" by U.S. News, admitting few transfer students and 4.1% of its applicants in the 2020–2021 admissions cycle. MIT offers 44 undergraduate degrees across its five schools. In the 2017–2018 academic year, 1,045 bachelor of science degrees (abbreviated "SB") were granted, the only type of undergraduate degree MIT now awards. In the 2011 fall term, among students who had designated a major, the School of Engineering was the most popular division, enrolling 63% of students in its 19 degree programs, followed by the School of Science (29%), School of Humanities, Arts, & Social Sciences (3.7%), Sloan School of Management (3.3%), and School of Architecture and Planning (2%). The largest undergraduate degree programs were in Electrical Engineering and Computer Science (), Computer Science and Engineering (), Mechanical Engineering (), Physics (), and Mathematics (). 
All undergraduates are required to complete a core curriculum called the General Institute Requirements (GIRs). The Science Requirement, generally completed during freshman year as prerequisites for classes in science and engineering majors, comprises two semesters of physics, two semesters of calculus, one semester of chemistry, and one semester of biology. There is a Laboratory Requirement, usually satisfied by an appropriate class in a course major. The Humanities, Arts, and Social Sciences (HASS) Requirement consists of eight semesters of classes in the humanities, arts, and social sciences, including at least one semester from each division as well as the courses required for a designated concentration in a HASS division. Under the Communication Requirement, two of the HASS classes, plus two of the classes taken in the designated major, must be "communication-intensive", including "substantial instruction and practice in oral presentation". Finally, all students are required to complete a swimming test; non-varsity athletes must also take four quarters of physical education classes. Most classes rely on a combination of lectures, recitations led by associate professors or graduate students, weekly problem sets ("p-sets"), and periodic quizzes or tests. While the pace and difficulty of MIT coursework has been compared to "drinking from a fire hose", the freshman retention rate at MIT is similar to that at other research universities. The "pass/no-record" grading system relieves some pressure for first-year undergraduates. For each class taken in the fall term, freshman transcripts will either report only that the class was passed, or otherwise not have any record of it. In the spring term, passing grades (A, B, C) appear on the transcript, while non-passing grades are again not recorded.
(Grading had previously been "pass/no record" all freshman year, but was amended for the Class of 2006 to prevent students from gaming the system by completing required major classes in their freshman year.) Also, freshmen may choose to join alternative learning communities, such as Experimental Study Group, Concourse, or Terrascope. In 1969, Margaret MacVicar founded the Undergraduate Research Opportunities Program (UROP) to enable undergraduates to collaborate directly with faculty members and researchers. Students join or initiate research projects ("UROPs") for academic credit, pay, or on a volunteer basis through postings on the UROP website or by contacting faculty members directly. A substantial majority of undergraduates participate. Students often become published, file patent applications, and/or launch start-up companies based upon their experience in UROPs. In 1970, the then-Dean of Institute Relations, Benson R. Snyder, published The Hidden Curriculum, arguing that education at MIT was often slighted in favor of following a set of unwritten expectations and that graduating with good grades was more often the product of figuring out the system rather than a solid education. The successful student, according to Snyder, was the one who was able to discern which of the formal requirements were to be ignored in favor of which unstated norms. For example, organized student groups had compiled "course bibles": collections of problem-set and examination questions and answers for later students to use as references. This sort of gamesmanship, Snyder argued, hindered development of a creative intellect and contributed to student discontent and unrest. Graduate program MIT's graduate program overlaps substantially with the undergraduate program, and many courses are taken by qualified students at both levels. MIT offers a comprehensive doctoral program with degrees in the humanities, social sciences, and STEM fields as well as professional degrees.
The Institute offers graduate programs leading to academic degrees such as the Master of Science (which is abbreviated as SM at MIT), various Engineer's Degrees, Doctor of Philosophy (PhD), and Doctor of Science (ScD), and interdisciplinary graduate programs such as the MD-PhD (with Harvard Medical School) and a joint program in oceanography with Woods Hole Oceanographic Institution. Admission to graduate programs is decentralized; applicants apply directly to the department or degree program. More than 90% of doctoral students are supported by fellowships, research assistantships (RAs), or teaching assistantships (TAs). MIT Bootcamps MIT Bootcamps are intense week-long innovation and leadership programs that challenge participants to develop a venture in a week. Each Bootcamp centers around a particular topic.

MIT was elected to the Association of American Universities in 1934. Still, as late as 1949, the Lewis Committee lamented in its report on the state of education at MIT that "the Institute is widely conceived as basically a vocational school", a "partly unjustified" perception the committee sought to change. The report comprehensively reviewed the undergraduate curriculum, recommended offering a broader education, and warned against letting engineering and government-sponsored research detract from the sciences and humanities. The School of Humanities, Arts, and Social Sciences and the MIT Sloan School of Management were formed in 1950 to compete with the powerful Schools of Science and Engineering. Previously marginalized faculties in the areas of economics, management, political science, and linguistics emerged into cohesive and assertive departments by attracting respected professors and launching competitive graduate programs. The School of Humanities, Arts, and Social Sciences continued to develop under the successive terms of the more humanistically oriented presidents Howard W. Johnson and Jerome Wiesner between 1966 and 1980.
Defense research MIT's involvement in military science surged during World War II. In 1941, Vannevar Bush was appointed head of the federal Office of Scientific Research and Development and directed funding to only a select group of universities, including MIT. Engineers and scientists from across the country gathered at MIT's Radiation Laboratory, established in 1940 to assist the British military in developing microwave radar. The work done there significantly affected both the war and subsequent research in the area. Other defense projects included gyroscope-based and other complex control systems for gunsight, bombsight, and inertial navigation under Charles Stark Draper's Instrumentation Laboratory; the development of a digital computer for flight simulations under Project Whirlwind; and high-speed and high-altitude photography under Harold Edgerton. By the end of the war, MIT had become the nation's largest wartime R&D contractor (attracting some criticism of Bush), employing nearly 4,000 people in the Radiation Laboratory alone and receiving in excess of $100 million ($ billion in 2015 dollars) before 1946. Work on defense projects continued even after the war. Post-war government-sponsored research at MIT included SAGE and guidance systems for ballistic missiles and Project Apollo. These activities affected MIT profoundly. A 1949 report noted the lack of "any great slackening in the pace of life at the Institute" to match the return to peacetime, remembering the "academic tranquility of the prewar years", though acknowledging the significant contributions of military research to the increased emphasis on graduate education and rapid growth of personnel and facilities.
The faculty doubled and the graduate student body quintupled during the terms of Karl Taylor Compton, president of MIT between 1930 and 1948; James Rhyne Killian, president from 1948 to 1957; and Julius Adams Stratton, chancellor from 1952 to 1957, whose institution-building strategies shaped the expanding university. By the 1950s, MIT no longer simply benefited the industries with which it had worked for three decades, and it had developed closer working relationships with new patrons: philanthropic foundations and the federal government. In the late 1960s and early 1970s, student and faculty activists protested against the Vietnam War and MIT's defense research. In this period, MIT's various departments were researching helicopters, smart bombs, and counterinsurgency techniques for the war in Vietnam as well as guidance systems for nuclear missiles. The Union of Concerned Scientists was founded on March 4, 1969, during a meeting of faculty members and students seeking to shift the emphasis on military research toward environmental and social problems. MIT ultimately divested itself from the Instrumentation Laboratory and moved all classified research off-campus to the MIT Lincoln Laboratory facility in 1973 in response to the protests. The student body, faculty, and administration remained comparatively unpolarized during what was a tumultuous time for many other universities. Johnson was seen to be highly successful in leading his institution to "greater strength and unity" after these times of turmoil. However, six MIT students were sentenced to prison terms at this time, and some former student leaders, such as Michael Albert and George Katsiaficas, are still indignant about MIT's role in military research and its suppression of these protests. (Richard Leacock's film, November Actions, records some of these tumultuous events.)
In the 1980s, there was more controversy at MIT over its involvement in SDI (space weaponry) and CBW (chemical and biological warfare) research. More recently, MIT's research for the military has included work on robots, drones and 'battle suits'. Recent history MIT has kept pace with and helped to advance the digital age. In addition to developing the predecessors to modern computing and networking technologies, students, staff, and faculty members at Project MAC, the Artificial Intelligence Laboratory, and the Tech Model Railroad Club wrote some of the earliest interactive computer video games like Spacewar! and created much of modern hacker slang and culture. Several major computer-related organizations have originated at MIT since the 1980s: Richard Stallman's GNU Project and the subsequent Free Software Foundation were founded in the mid-1980s at the AI Lab; the MIT Media Lab was founded in 1985 by Nicholas Negroponte and Jerome Wiesner to promote research into novel uses of computer technology; the World Wide Web Consortium standards organization was founded at the Laboratory for Computer Science in 1994 by Tim Berners-Lee; the OpenCourseWare project has made course materials for over 2,000 MIT classes available online free of charge since 2002; and the One Laptop per Child initiative to expand computer education and connectivity to children worldwide was launched in 2005. MIT was named a sea-grant college in 1976 to support its programs in oceanography and marine sciences and was named a space-grant college in 1989 to support its aeronautics and astronautics programs. 
Despite diminishing government financial support over the past quarter century, MIT launched several successful development campaigns to significantly expand the campus: new dormitories and athletics buildings on west campus; the Tang Center for Management Education; several buildings in the northeast corner of campus supporting research into biology, brain and cognitive sciences, genomics, biotechnology, and cancer research; and a number of new "backlot" buildings on Vassar Street including the Stata Center. Construction on campus in the 2000s included expansions of the Media Lab, the Sloan School's eastern campus, and graduate residences in the northwest. In 2006, President Hockfield launched the MIT Energy Research Council to investigate the interdisciplinary challenges posed by increasing global energy consumption. In 2001, inspired by the open source and open access movements, MIT launched OpenCourseWare to make the lecture notes, problem sets, syllabi, exams, and lectures from the great majority of its courses available online for no charge, though without any formal accreditation for coursework completed. While the cost of supporting and hosting the project is high, OCW expanded in 2005 to include other universities as a part of the OpenCourseWare Consortium, which currently includes more than 250 academic institutions with content available in at least six languages. In 2011, MIT announced it would offer formal certification (but not credits or degrees) to online participants completing coursework in its "MITx" program, for a modest fee. The "edX" online platform supporting MITx was initially developed in partnership with Harvard and its analogous "Harvardx" initiative. The courseware platform is open source, and other universities have already joined and added their own course content. In March 2009 the MIT faculty adopted an open-access policy to make its scholarship publicly accessible online. MIT has its own police force. 
Three days after the Boston Marathon bombing of April 2013, MIT Police patrol officer Sean Collier was fatally shot by the suspects Dzhokhar and Tamerlan Tsarnaev, setting off a violent manhunt that shut down the campus and much of the Boston metropolitan area for a day. One week later, Collier's memorial service was attended by more than 10,000 people, in a ceremony hosted by the MIT community with thousands of police officers from the New England region and Canada. On November 25, 2013, MIT announced the creation of the Collier Medal, to be awarded annually to "an individual or group that embodies the character and qualities that Officer Collier exhibited as a member of the MIT community and in all aspects of his life". The announcement further stated that "Future recipients of the award will include those whose contributions exceed the boundaries of their profession, those who have contributed to building bridges across the community, and those who consistently and selflessly perform acts of kindness". In September 2017, the school announced the creation of an artificial intelligence research lab called the MIT-IBM Watson AI Lab. IBM will spend $240 million over the next decade, and the lab will be staffed by MIT and IBM scientists. In October 2018 MIT announced that it would open a new Schwarzman College of Computing dedicated to the study of artificial intelligence, named after lead donor and The Blackstone Group CEO Stephen Schwarzman. The focus of the new college is to study not just AI, but interdisciplinary AI education, and how AI can be used in fields as diverse as history and biology. The cost of buildings and new faculty for the new college is expected to be $1 billion upon completion. The Laser Interferometer Gravitational-Wave Observatory (LIGO) was designed and constructed by a team of scientists from California Institute of Technology, MIT, and industrial contractors, and funded by the National Science Foundation. 
It was designed to open the field of gravitational-wave astronomy through the detection of gravitational waves predicted by general relativity. Gravitational waves were detected for the first time by the LIGO detector in 2015. For contributions to the LIGO detector and the observation of gravitational waves, two Caltech physicists, Kip Thorne and Barry Barish, and MIT physicist Rainer Weiss won the Nobel Prize in Physics in 2017. Weiss, who is also an MIT graduate, designed the laser interferometric technique that served as the essential blueprint for LIGO. In 2021, MIT researchers in computer science and artificial intelligence developed an AI system that makes robots better at handling objects. The simulated, anthropomorphic hand they created could manipulate more than 2,000 objects, and the system did not need to know what it was about to pick up in order to find a way to move it around in its hand. Campus MIT's campus in the city of Cambridge spans approximately a mile along the north side of the Charles River basin. The campus is divided roughly in half by Massachusetts Avenue, with most dormitories and student life facilities to the west and most academic buildings to the east. The bridge closest to MIT is the Harvard Bridge, which is known for being marked off in a non-standard unit of length – the smoot. The Kendall/MIT MBTA Red Line station is located on the northeastern edge of the campus, in Kendall Square. The Cambridge neighborhoods surrounding MIT are a mixture of high-tech companies occupying both modern office and rehabilitated industrial buildings, as well as socio-economically diverse residential neighborhoods. In early 2016, MIT presented its updated Kendall Square Initiative to the City of Cambridge, with plans for mixed-use educational, retail, residential, startup incubator, and office space in a dense high-rise transit-oriented development plan.
The MIT Museum will eventually be moved immediately adjacent to a Kendall Square subway entrance, joining the List Visual Arts Center on the eastern end of the campus. Each building at MIT has a number (possibly preceded by a W, N, E, or NW) designation, and most have a name as well. Typically, academic and office buildings are referred to primarily by number while residence halls are referred to by name. The organization of building numbers roughly corresponds to the order in which the buildings were built and their location relative (north, west, and east) to the original center cluster of Maclaurin buildings. Many of the buildings are connected above ground as well as through an extensive network of tunnels, providing protection from the Cambridge weather as well as a venue for roof and tunnel hacking. MIT's on-campus nuclear reactor is one of the most powerful university-based nuclear reactors in the United States. The prominence of the reactor's containment building in a densely populated area has been controversial, but MIT maintains that it is well-secured. In 1999, Bill Gates donated US$20 million to MIT for the construction of a computer laboratory named the "William H. Gates Building" and designed by architect Frank Gehry. While Microsoft had previously given financial support to the institution, this was the first personal donation received from Gates. MIT Nano, also known as Building 12, is an interdisciplinary facility for nanoscale research. Its cleanroom and research space, visible through expansive glass facades, is the largest research facility of its kind in the nation. With a cost of US$400 million, it is also one of the costliest buildings on campus. The facility also provides state-of-the-art nanoimaging capabilities with vibration-damped imaging and metrology suites sitting atop a slab of concrete underground.
Other notable campus facilities include a pressurized wind tunnel for aerodynamic testing, a towing tank for testing ship and ocean structure designs, and, previously, Alcator C-Mod, which was the largest fusion device operated by any university. MIT's campus-wide wireless network was completed in the fall of 2005 and consists of nearly 3,000 access points covering the campus. In 2001, the Environmental Protection Agency sued MIT for violating the Clean Water Act and the Clean Air Act with regard to its hazardous waste storage and disposal procedures. MIT settled the suit by paying a $155,000 fine and launching three environmental projects. In connection with capital campaigns to expand the campus, the Institute has also extensively renovated existing buildings to improve their energy efficiency. MIT has also taken steps to reduce its environmental impact by running alternative-fuel campus shuttles, subsidizing public transportation passes, and building a low-emission cogeneration plant that serves most of the campus's electricity, heating, and cooling requirements. MIT has substantial commercial real estate holdings in Cambridge on which it pays property taxes, plus an additional voluntary payment in lieu of taxes (PILOT) on academic buildings which are legally tax-exempt. It is the largest taxpayer in the city, contributing approximately 14% of the city's annual revenues. Holdings include Technology Square, parts of Kendall Square, and many properties in Cambridgeport and Area 4 neighboring the educational buildings. The land is held for investment purposes and potential long-term expansion. Architecture MIT's School of Architecture, now the School of Architecture and Planning, was the first formal architecture program in the United States, and it has a history of commissioning progressive buildings.
The first buildings constructed on the Cambridge campus, completed in 1916, are sometimes called the "Maclaurin buildings" after Institute president Richard Maclaurin who oversaw their construction. Designed by William Welles Bosworth, these imposing buildings were built of reinforced concrete, a first for a non-industrial – much less university – building in the US. Bosworth's design was influenced by the City Beautiful Movement of the early 1900s and features the Pantheon-esque Great Dome housing the Barker Engineering Library. The Great Dome overlooks Killian Court, where graduation ceremonies are held each year. The friezes of the limestone-clad buildings around Killian Court are engraved with the names of important scientists and philosophers. The spacious Building 7 atrium at 77 Massachusetts Avenue is regarded as the entrance to the Infinite Corridor and the rest of the campus. Alvar Aalto's Baker House (1947), Eero Saarinen's MIT Chapel and Kresge Auditorium (1955), and I.M. Pei's Green, Dreyfus, Landau, and Wiesner buildings represent high forms of post-war modernist architecture. More recent buildings like Frank Gehry's Stata Center (2004), Steven Holl's Simmons Hall (2002), Charles Correa's Building 46 (2005), and Fumihiko Maki's Media Lab Extension (2009) stand out among the Boston area's classical architecture and serve as examples of contemporary campus "starchitecture". These buildings have not always been well received; in 2010, The Princeton Review included MIT in a list of twenty schools whose campuses are "tiny, unsightly, or both". Housing Undergraduates are guaranteed four-year housing in one of MIT's 11 undergraduate dormitories. Out of the 11 dormitories, 10 are currently active due to one of the residential halls, Burton Conner, undergoing renovation from 2020 to 2022. Those living on campus can receive support and mentoring from live-in graduate student tutors, resident advisors, and faculty housemasters. 
Because housing assignments are made based on the preferences of the students themselves, diverse social atmospheres can be sustained in different living groups; for example, according to the Yale Daily News staff's The Insider's Guide to the Colleges, 2010, "The split between East Campus and West Campus is a significant characteristic of MIT. East Campus has gained a reputation as a thriving counterculture." MIT also has 5 dormitories for single graduate students and 2 apartment buildings on campus for married student families. MIT has an active Greek and co-op housing system, including thirty-six fraternities, sororities, and independent living groups (FSILGs). 98% of all undergraduates lived in MIT-affiliated housing; 54% of the men participated in fraternities and 20% of the women were involved in sororities. Most FSILGs are located across the river in Back Bay near where MIT was founded, and there is also a cluster of fraternities on MIT's West Campus that face the Charles River Basin. After the 1997 alcohol-related death of Scott Krueger, a new pledge at the Phi Gamma Delta fraternity, MIT required all freshmen to live in the dormitory system starting in 2002. Because FSILGs had previously housed as many as 300 freshmen off-campus, the new policy could not be implemented until Simmons Hall opened in that year. In 2013–2014, MIT abruptly closed and then demolished undergrad dorm Bexley Hall, citing extensive water damage that made repairs infeasible. In 2017, MIT shut down Senior House after a century of service as an undergrad dorm. That year, MIT administrators released data showing just 60% of Senior House residents had graduated in four years. Campus-wide, the four-year graduation rate is 84% (the cumulative graduation rate is significantly higher). Organization and administration MIT is chartered as a non-profit organization and is owned and governed by a privately appointed board of trustees known as the MIT Corporation.
The current board consists of 43 members elected to five-year terms, 25 life members who vote until their 75th birthday, 3 elected officers (President, Treasurer, and Secretary), and 4 ex officio members (the president of the alumni association, the Governor of Massachusetts, the Massachusetts Secretary of Education, and the Chief Justice of the Massachusetts Supreme Judicial Court). The board is chaired by Diane Greene SM ’78, co-founder and former CEO of VMware and former CEO of Google Cloud. The Corporation approves the budget, new programs, degrees and faculty appointments, and elects the President to serve as the chief executive officer of the university and preside over the Institute's faculty. MIT's endowment and other financial assets are managed through a subsidiary called MIT Investment Management Company (MITIMCo). Valued at $16.4 billion in 2018, MIT's endowment was then the sixth-largest among American colleges and universities. MIT has five schools (Science, Engineering, Architecture and Planning, Management, and Humanities, Arts, and Social Sciences) and one college (Schwarzman College of Computing), but no schools of law or medicine. While faculty committees assert substantial control over many areas of MIT's curriculum, research, student life, and administrative affairs, the chair of each of MIT's 32 academic departments reports to the dean of that department's school, who in turn reports to the Provost under the President. The current president is L. Rafael Reif, who formerly served as provost under President Susan Hockfield, the first woman to hold the post. Academics MIT is a large, highly residential, research university with a majority of enrollments in graduate and professional programs. The university has been accredited by the New England Association of Schools and Colleges since 1929. 
MIT operates on a 4–1–4 academic calendar with the fall semester beginning after Labor Day and ending in mid-December, a 4-week "Independent Activities Period" in the month of January, and the spring semester commencing in early February and ending in late May. MIT students refer to both their majors and classes using numbers or acronyms alone. Departments and their corresponding majors are numbered in the approximate order of their foundation; for example, Civil and Environmental Engineering is Course 1, while Linguistics and Philosophy is Course 24. Students majoring in Electrical Engineering and Computer Science (EECS), the most popular department, collectively identify themselves as "Course 6". MIT students use a combination of the department's course number and the number assigned to the class to identify their subjects; for instance, the introductory calculus-based classical mechanics course is simply "8.01" at MIT. Undergraduate program The four-year, full-time undergraduate program maintains a balance between professional majors and those in the arts and sciences, and has been dubbed "most selective" by U.S. News, admitting few transfer students and 4.1% of its applicants in the 2020–2021 admissions cycle. MIT offers 44 undergraduate degrees across its five schools. In the 2017–2018 academic year, 1,045 bachelor of science degrees (abbreviated "SB") were granted, the only type of undergraduate degree MIT now awards. In the 2011 fall term, among students who had designated a major, the School of Engineering was the most popular division, enrolling 63% of students in its 19 degree programs, followed by the School of Science (29%), School of Humanities, Arts, & Social Sciences (3.7%), Sloan School of Management (3.3%), and School of Architecture and Planning (2%). The largest undergraduate degree programs were in Electrical Engineering and Computer Science (Course 6-2), Computer Science and Engineering (Course 6-3), Mechanical Engineering (Course 2), Physics (Course 8), and Mathematics (Course 18).
All undergraduates are required to complete a core curriculum called the General Institute Requirements (GIRs). The Science Requirement, generally completed during freshman year as prerequisites for classes in science and engineering majors, comprises two semesters of physics, two semesters of calculus, one semester of chemistry, and one semester of biology. There is a Laboratory Requirement, usually satisfied by an appropriate class in a course major. The Humanities, Arts, and Social Sciences (HASS) Requirement consists of eight semesters of classes in the humanities, arts, and social sciences, including at least one semester from each division as well as the courses required for a designated concentration in a HASS division. Under the Communication Requirement, two of the HASS classes, plus two of the classes taken in the designated major must be "communication-intensive", including "substantial instruction and practice in oral presentation". Finally, all students are required to complete a swimming test; non-varsity athletes must also take four quarters of physical education classes. Most classes rely on a combination of lectures, recitations led by associate professors or graduate students, weekly problem sets ("p-sets"), and periodic quizzes or tests. While the pace and difficulty of MIT coursework has been compared to "drinking from a fire hose", the freshmen retention rate at MIT is similar to other research universities. The "pass/no-record" grading system relieves some pressure for first-year undergraduates. For each class taken in the fall term, freshmen transcripts will either report only that the class was passed, or otherwise not have any record of it. In the spring term, passing grades (A, B, C) appear on the transcript while non-passing grades are again not recorded. 
(Grading had previously been "pass/no record" all freshman year, but was amended for the Class of 2006 to prevent students from gaming the system by completing required major classes in their freshman year.) Also, freshmen may choose to join alternative learning communities, such as Experimental Study Group, Concourse, or Terrascope. In 1969, Margaret MacVicar founded the Undergraduate Research Opportunities Program (UROP) to enable undergraduates to collaborate directly with faculty members and researchers. Students join or initiate research projects ("UROPs") for academic credit, pay, or on a volunteer basis through postings on the UROP website or by contacting faculty members directly. A substantial majority of undergraduates participate. Students often become published, file patent applications, and/or launch start-up companies based upon their experience in UROPs. In 1970, the then-Dean of Institute Relations, Benson R. Snyder, published The Hidden Curriculum, arguing that education at MIT was often slighted in favor of following a set of unwritten expectations and that graduating with good grades was more often the product of figuring out the system rather than a solid education. The successful student, according to Snyder, was the one who was able to discern which of the formal requirements were to be ignored in favor of which unstated norms. For example, organized student groups had compiled "course bibles"—collections of problem-set and examination questions and answers for later students to use as references. This sort of gamesmanship, Snyder argued, hindered development of a creative intellect and contributed to student discontent and unrest. Graduate program MIT's graduate program has high coexistence with the undergraduate program, and many courses are taken by qualified students at both levels. MIT offers a comprehensive doctoral program with degrees in the humanities, social sciences, and STEM fields as well as professional degrees. 
The Institute offers graduate programs leading to academic degrees such as the Master of Science (which is abbreviated as SM at MIT), various Engineer's Degrees, Doctor of Philosophy (PhD), and Doctor of Science (ScD) and interdisciplinary graduate programs such as the MD-PhD (with Harvard Medical School) and a joint program in oceanography with Woods Hole Oceanographic Institution. Admission to graduate programs is decentralized; applicants apply directly to the department or degree program. More than 90% of doctoral students are supported by fellowships, research assistantships (RAs), or teaching assistantships (TAs). MIT Bootcamps MIT Bootcamps are intense week-long innovation and leadership programs that challenge participants to develop a venture in a week. Each Bootcamp centers around a particular topic, specific to an industry, leadership skill set, or emerging technology. Cohorts are organized into small teams who work on an entrepreneurial project together, in addition to individual learning and team coaching. The program includes a series of online seminars with MIT faculty, practitioners, and industry experts, innovation workshops with bootcamp instructors focused on putting the theory participants have learned into practice, coaching sessions, and informal office hours for learners to exchange ideas freely. Bootcampers are tasked with weekly "deliverables," which are key elements of a business plan, to help guide the group through the decision-making process involved in building an enterprise. The experience culminates in a final pitch session, judged by a panel of experts. MIT Bootcamp instructors include Eric von Hippel, Sanjay Sarma, Erdin Beshimov, and Bill Aulet. MIT Bootcamps were founded by Erdin Beshimov.
A product group is a "collection of similar products". The fact that there are "many firms" means that each firm has a small market share. This gives each MC firm the freedom to set prices without engaging in strategic decision making regarding the prices of other firms (no mutual interdependence) and each firm's actions have a negligible impact on the market. For example, a firm could cut prices and increase sales without fear that its actions will prompt retaliatory responses from competitors. How many firms will an MC market structure support at market equilibrium? The answer depends on factors such as fixed costs, economies of scale and the degree of product differentiation. For example, the higher the fixed costs, the fewer firms the market will support. Freedom of entry and exit Like perfect competition, under monopolistic competition also, the firms can enter or exit freely. The firms will enter when the existing firms are making super-normal profits. With the entry of new firms, the supply would increase, which would reduce the price, and hence the existing firms will be left only with normal profits. Similarly, if the existing firms are sustaining losses, some of the marginal firms will exit. This will reduce the supply, due to which the price would rise, and the existing firms will be left only with normal profit. Independent decision making Each MC firm independently sets the terms of exchange for its product. The firm gives no consideration to what effect its decision may have on competitors. The theory is that any action will have such a negligible effect on the overall market demand that an MC firm can act without fear of prompting heightened competition. In other words, each firm feels free to set prices as if it were a monopoly rather than an oligopoly. Market power MC firms have some degree of market power, although relatively low. Market power means that the firm has control over the terms and conditions of exchange. All MC firms are price makers.
An MC firm can raise its prices without losing all its customers. The firm can also lower prices without triggering a potentially ruinous price war with competitors. The source of an MC firm's market power is not barriers to entry, since these are low. Rather, an MC firm has market power because it has relatively few competitors, those competitors do not engage in strategic decision making, and the firms sell differentiated products. Market power also means that an MC firm faces a downward sloping demand curve. In the long run, the demand curve is highly elastic, meaning that it is sensitive to price changes, although it is not completely "flat". In the short run, economic profit is positive, but it approaches zero in the long run. Imperfect information No other sellers or buyers have complete market information, like market demand or market supply. Inefficiency There are two sources of inefficiency in the MC market structure. First, at its optimum output the firm charges a price that exceeds marginal cost: the MC firm maximizes profits where marginal revenue = marginal cost, and since the MC firm's demand curve is downward sloping, the firm will be charging a price that exceeds marginal cost. The monopoly power possessed by an MC firm means that at its profit maximizing level of production there will be a net loss of consumer (and producer) surplus. The second source of inefficiency is the fact that MC firms operate with excess capacity. That is, the MC firm's profit maximizing output is less than the output associated with minimum average cost. Both a PC and an MC firm will operate at a point where demand or price equals average cost. For a PC firm this equilibrium condition occurs where the perfectly elastic demand curve equals minimum average cost. An MC firm's demand curve is not flat but is downward sloping. Thus in the long run the demand curve will be tangential to the long run average cost curve at a point to the left of its minimum.
The result is excess capacity. Socially undesirable aspects compared to perfect competition Selling costs: Producers under monopolistic competition often spend substantial amounts on advertising and publicity. Much of this expenditure is wasteful from the social point of view. The producer could reduce the price of the product instead of spending on publicity. Excess capacity: Under imperfect competition, the installed capacity of every firm is large, but not fully utilized. Total output is, therefore, less than the output which is socially desirable. Since production capacity is not fully utilized, the resources lie idle. Therefore, production under monopolistic competition is below the full capacity level. Unemployment: Idle production capacity also means that resources which could have been employed remain unemployed. Monopolistic competition is further characterized by the fact that buyers and sellers do not have perfect information (imperfect information). Product differentiation MC firms sell products that have real or perceived non-price differences. Examples of these differences could include physical aspects of the product, location from which it sells the product or intangible aspects of the product, among others. However, the differences are not so great as to eliminate other goods as substitutes. Technically, the cross price elasticity of demand between goods in such a market is positive. In fact, the XED would be high. MC goods are best described as close but imperfect substitutes. The goods perform the same basic functions but have differences in qualities such as type, style, quality, reputation, appearance, and location that tend to distinguish them from each other. For example, the basic function of motor vehicles is the same—to move people and objects from point to point in reasonable comfort and safety. Yet there are many different types of motor vehicles such as motor scooters, motorcycles, trucks and cars, and many variations even within these categories. Many firms There are many firms in each MC product group and many firms on the side lines prepared to enter the market.
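The inefficiency argument above (profit maximization where marginal revenue equals marginal cost, yielding a price above marginal cost) can be made concrete with a small numeric sketch. The linear demand curve and cost function below are invented purely for illustration:

```python
# Sketch of an MC firm's profit maximization with a downward-sloping demand curve.
# Assumed (illustrative) parameters, not from any real market:
#   demand:  P(q) = a - b*q
#   cost:    C(q) = F + c*q   (fixed cost F, constant marginal cost c)
a, b, F, c = 100.0, 2.0, 50.0, 20.0

def price(q):
    return a - b * q

def marginal_revenue(q):
    return a - 2 * b * q  # for linear demand, MR falls twice as fast as price

def profit(q):
    return price(q) * q - (F + c * q)

# Profit is maximized where MR = MC:  a - 2*b*q = c
q_star = (a - c) / (2 * b)
p_star = price(q_star)

# The profit-maximizing price exceeds marginal cost, unlike under
# perfect competition where price is driven down to marginal cost.
assert p_star > c
```

With these numbers, q_star = 20 and p_star = 60, while marginal cost is 20, so the firm sells less, at a higher price, than a price-taking firm would.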
If traditional predecessor induction is interpreted computationally as an n-step loop, then prefix induction would correspond to a log-n-step loop. Because of that, proofs using prefix induction are "more feasibly constructive" than proofs using predecessor induction. Predecessor induction can trivially simulate prefix induction on the same statement. Prefix induction can simulate predecessor induction, but only at the cost of making the statement more syntactically complex (adding a bounded universal quantifier), so the interesting results relating prefix induction to polynomial-time computation depend on excluding unbounded quantifiers entirely, and limiting the alternation of bounded universal and existential quantifiers allowed in the statement. One can take the idea a step further: one must prove P(⌊√n⌋) → P(n), whereupon the induction principle "automates" log log n applications of this inference in getting from P(0) to P(n). This form of induction has been used, analogously, to study log-time parallel computation. Complete (strong) induction Another variant, called complete induction, course of values induction or strong induction (in contrast to which the basic form of induction is sometimes known as weak induction), makes the inductive step easier to prove by using a stronger hypothesis: one proves the statement P(m + 1) under the assumption that P(n) holds for all natural numbers n less than m + 1; by contrast, the basic form only assumes P(m). The name "strong induction" does not mean that this method can prove more than "weak induction", but merely refers to the stronger hypothesis used in the inductive step. In fact, it can be shown that the two methods are actually equivalent, as explained below. In this form of complete induction, one still has to prove the base case, P(0), and it may even be necessary to prove extra-base cases such as P(1) before the general argument applies, as in the example below of the Fibonacci number F(n). Although the form just described requires one to prove the base case, this is unnecessary if one can prove P(m) (assuming P(n) for all lower n) for all m ≥ 0.
This is a special case of transfinite induction as described below, although it is no longer equivalent to ordinary induction. In this form the base case is subsumed by the case m = 0, where P(0) is proved with no other P(n) assumed; this case may need to be handled separately, but sometimes the same argument applies for m = 0 and m > 0, making the proof simpler and more elegant. In this method, however, it is vital to ensure that the proof of P(m) does not implicitly assume that m > 0, e.g. by saying "choose an arbitrary n < m", or by assuming that a set of m elements has an element. Complete induction is equivalent to ordinary mathematical induction as described above, in the sense that a proof by one method can be transformed into a proof by the other. Suppose there is a proof of P(n) by complete induction. Let Q(n) be the statement "P(m) holds for all m such that 0 ≤ m ≤ n". Then Q(n) holds for all n if and only if P(n) holds for all n, and our proof of P(n) is easily transformed into a proof of Q(n) by (ordinary) induction. If, on the other hand, P(n) had been proven by ordinary induction, the proof would already effectively be one by complete induction: P(0) is proved in the base case, using no assumptions, and P(n + 1) is proved in the inductive step, in which one may assume all earlier cases but need only use the case P(n). Example: Fibonacci numbers Complete induction is most useful when several instances of the inductive hypothesis are required for each inductive step. For example, complete induction can be used to show that F(n) = (φⁿ − ψⁿ)/(φ − ψ), where F(n) is the nth Fibonacci number, and φ = (1 + √5)/2 (the golden ratio) and ψ = (1 − √5)/2 are the roots of the polynomial x² − x − 1. By using the fact that F(n + 2) = F(n + 1) + F(n) for each natural number n, the identity above can be verified by direct calculation for F(n + 2) if one assumes that it already holds for both F(n + 1) and F(n). To complete the proof, the identity must be verified in the two base cases: n = 0 and n = 1. Example: prime factorization Another proof by complete induction uses the hypothesis that the statement holds for all smaller n more thoroughly.
Consider the statement that "every natural number greater than 1 is a product of (one or more) prime numbers", which is the "existence" part of the fundamental theorem of arithmetic. For proving the inductive step, the induction hypothesis is that for a given m > 1 the statement holds for all smaller n > 1. If m is prime then it is certainly a product of primes, and if not, then by definition it is a product: m = n₁ · n₂, where neither of the factors is equal to 1; hence neither is equal to m, and so both are greater than 1 and smaller than m. The induction hypothesis now applies to n₁ and n₂, so each one is a product of primes. Thus m is a product of products of primes, and hence by extension a product of primes itself. Example: dollar amounts revisited We shall look to prove the same example as above, this time with strong induction. The statement remains the same: S(n): "n dollars can be formed by a combination of 4- and 5-dollar coins", for all n ≥ 12. However, there will be slight differences in the structure and the assumptions of the proof, starting with the extended base case: Base case: Show that S(k) holds for k = 12, 13, 14, and 15. The base case holds. Induction hypothesis: Given some j > 15, assume S(m) holds for all m with 12 ≤ m ≤ j. Inductive step: Prove that S(j + 1) holds. Choosing m = j − 3, and observing that 12 ≤ j − 3 ≤ j, shows that S(j − 3) holds, by the inductive hypothesis. That is, the sum j − 3 can be formed by some combination of 4- and 5-dollar coins. Then, simply adding a 4-dollar coin to that combination yields the sum j + 1. That is, S(j + 1) holds. Q.E.D. Forward-backward induction Sometimes, it is more convenient to deduce backwards, proving the statement for n − 1, given its validity for n. However, proving the validity of the statement for no single number suffices to establish the base case; instead, one needs to prove the statement for an infinite subset of the natural numbers. For example, Augustin Louis Cauchy first used forward (regular) induction to prove the inequality of arithmetic and geometric means for all powers of 2, and then used backwards induction to show it for all natural numbers.
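The complete-induction argument for prime factorization translates naturally into a recursive function: the case for m reduces to the smaller cases for its two factors, exactly as the proof's inductive hypothesis does. A minimal sketch (the function name and structure are illustrative, not from the source):

```python
def prime_factors(m):
    """Return a list of primes whose product is m (requires m > 1).

    Mirrors the complete-induction proof: if m is not prime, write
    m = n1 * n2 with 1 < n1, n2 < m, and apply the "induction
    hypothesis" (the recursive call) to both smaller factors.
    """
    for n1 in range(2, int(m ** 0.5) + 1):
        if m % n1 == 0:
            n2 = m // n1
            return prime_factors(n1) + prime_factors(n2)
    # No divisor in 2..sqrt(m) means m is prime: the base of the recursion.
    return [m]
```

For example, `prime_factors(360)` returns `[2, 2, 2, 3, 3, 5]`, the prime factorization whose existence the induction proves.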
Example of error in the inductive step The inductive step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color: Base case: In a set of only one horse, there is only one color. Inductive step: Assume as induction hypothesis that within any set of n horses, there is only one color. Now look at any set of n + 1 horses. Number them: 1, 2, 3, ..., n, n + 1. Consider the sets {1, 2, 3, ..., n} and {2, 3, 4, ..., n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses. The base case is trivial (as any horse is the same color as itself), and the inductive step is correct in all cases n > 1. However, the logic of the inductive step is incorrect for n = 1, because the statement that "the two sets overlap" is false (there are only n + 1 = 2 horses prior to either removal, and after removal the sets of one horse each do not overlap). Formalization In second-order logic, one can write down the "axiom of induction" as follows: ∀P [ (P(0) ∧ ∀k (P(k) → P(k + 1))) → ∀n P(n) ], where P(·) is a variable for predicates involving one natural number and k and n are variables for natural numbers. In words, the base case P(0) and the inductive step (namely, that the induction hypothesis P(k) implies P(k + 1)) together imply that P(n) holds for any natural number n. The axiom of induction asserts the validity of inferring that P(n) holds for any natural number n from the base case and the inductive step. The first quantifier in the axiom ranges over predicates rather than over individual numbers; it is a second-order quantifier. Induction basis other than 0 or 1 If one wishes to prove a statement, not for all natural numbers, but only for all numbers n greater than or equal to a certain number b, then the proof by induction consists of the following: Showing that the statement holds when n = b. Showing that if the statement holds for an arbitrary number n ≥ b, then the same statement also holds for n + 1. This can be used, for example, to show that 2ⁿ ≥ n + 3 for n ≥ 3. In this way, one can prove that some statement P(n) holds for all n ≥ 1, or even for all n ≥ −5.
This form of mathematical induction is actually a special case of the previous form, because if the statement to be proved is P(n) then proving it with these two rules is equivalent with proving P(n + b) for all natural numbers n with an induction base case 0. Example: forming dollar amounts by coins Assume an infinite supply of 4- and 5-dollar coins. Induction can be used to prove that any whole amount of dollars greater than or equal to 12 can be formed by a combination of such coins. Let S(k) denote the statement "k dollars can be formed by a combination of 4- and 5-dollar coins". The proof that S(k) is true for all k ≥ 12 can then be achieved by induction on k as follows: Base case: Showing that S(k) holds for k = 12 is simple: take three 4-dollar coins. Induction step: Given that S(k) holds for some value of k ≥ 12 (induction hypothesis), prove that S(k + 1) holds, too: Assume S(k) is true for some arbitrary k ≥ 12. If there is a solution for k dollars that includes at least one 4-dollar coin, replace it by a 5-dollar coin to make k + 1 dollars. Otherwise, if only 5-dollar coins are used, k must be a multiple of 5 and so at least 15; but then we can replace three 5-dollar coins by four 4-dollar coins to make k + 1 dollars. In each case, S(k + 1) is true. Therefore, by the principle of induction, S(k) holds for all k ≥ 12, and the proof is complete. In this example, although S(k) also holds for k ∈ {4, 5, 8, 9, 10}, the above proof cannot be modified to replace the minimum amount of 12 dollars to any lower value m. For m = 11, the base case is actually false; for m = 10, the second case in the induction step (replacing three 5- by four 4-dollar coins) will not work; let alone for even lower m. Induction on more than one counter It is sometimes desirable to prove a statement involving two natural numbers, n and m, by iterating the induction process. That is, one proves a base case and an inductive step for n, and in each of those proves a base case and an inductive step for m. See, for example, the proof of commutativity accompanying addition of natural numbers.
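The inductive step in the dollar-amount proof is constructive, so it can be run directly as code: start from the base-case solution for 12 dollars and apply the step once per additional dollar. A sketch (names are my own):

```python
def coins_for(n):
    """Build a combination of 4- and 5-dollar coins summing to n (n >= 12)
    by replaying the induction: base case S(12), then n - 12 inductive steps."""
    assert n >= 12
    fours, fives = 3, 0              # base case: 12 = 4 + 4 + 4
    for _ in range(n - 12):          # each step turns a solution for k into one for k + 1
        if fours >= 1:
            # swap one 4-dollar coin for a 5-dollar coin: k + 1 dollars
            fours, fives = fours - 1, fives + 1
        else:
            # only 5s used, so k is a multiple of 5 and at least 15:
            # swap three 5s for four 4s: k + 1 dollars
            fours, fives = fours + 4, fives - 3
    return fours, fives
```

For any n ≥ 12, the returned counts satisfy 4·fours + 5·fives = n, which is exactly the statement S(n).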
More complicated arguments involving three or more counters are also possible. Infinite descent The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n. The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n. Prefix induction The most common form of proof by mathematical induction requires proving in the inductive step that P(n) → P(n + 1), whereupon the induction principle "automates" n applications of this step in getting from P(0) to P(n). This could be called "predecessor induction" because each step proves something about a number from something about that number's predecessor. A variant of interest in computational complexity is "prefix induction", in which one proves the following statement in the inductive step: P(n) → P(2n) ∧ P(2n + 1), or equivalently P(⌊n/2⌋) → P(n). The induction principle then "automates" log2 n applications of this inference in getting from P(0) to P(n). In fact, it is called "prefix induction" because each step proves something about a number from something about the "prefix" of that number — as formed by truncating the low bit of its binary representation. It can also be viewed as an application of traditional induction on the length of that binary representation.
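The correspondence between prefix induction and a log-n-step loop can be seen in exponentiation by squaring, where the result for n is obtained from the result for ⌊n/2⌋ (n with its low bit truncated). A sketch:

```python
def power(x, n):
    """Compute x**n for n >= 0 in about log2(n) recursive steps.

    Mirrors prefix induction: the value for n is derived from the value
    for n // 2, i.e. from the "prefix" of n's binary representation.
    """
    if n == 0:
        return 1                      # base case P(0)
    half = power(x, n // 2)           # "P(floor(n/2)) ..."
    result = half * half              # ... implies P(2 * (n // 2))
    if n % 2:
        result *= x                   # and P(2 * (n // 2) + 1)
    return result
```

For example, `power(3, 13)` performs 4 recursive calls rather than 13 multiplications by 3, just as a prefix-induction proof of P(13) chains through P(6), P(3), P(1), and P(0).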
Matrix (magazine), published by the Association for Women in Communications The Matrix, a 1994 novel by Denis MacEoin as Jonathan Aycliffe Matrix, a journal on printing published by Whittington Press Music Matrix (band), an American jazz-fusion ensemble Matrix (musician), a British producer and DJ Matrix (music), an element in musical variations that remains unchanged matrix, a 2000 album by Ryoji Ikeda Matrix (EP), by B.A.P, 2015 "Matrix", a song by Chick Corea from the 1968 album Now He Sings, Now He Sobs "Matrix", a song by Kate Pierson from the 2015 album Guitars and Microphones The Matrix (production team), a pop-music production team The Matrix (The Matrix album), 2009 Businesses and organisations Matrix Business Technologies or Matrix Telecom, Inc., an American telecommunications firm Matrix Chambers, a set of barristers' chambers in London and Geneva Matrix Feminist Design Co-operative, a London-based architectural collective, 1980–1994 Matrix Games, an American video game publisher Matrix Partners, an American private equity investment firm Matrix Software, or Matrix Corporation, a Japanese video game developer Matrix (club), a Berlin nightclub opened in 1996 The Matrix (club), a San Francisco nightclub, 1965–1972 The Matrix Theatre Company, in Los Angeles, California, US People David Krejčí (born 1986), Czech ice hockey player nicknamed "The Matrix" Shawn Marion (born 1978), American basketball player nicknamed "The Matrix" Marco Materazzi (born 1973), Italian footballer nicknamed "Matrix" Technology Matrix (mass spectrometry), a compound that promotes the formation of ions Matrix (numismatics), a tool used in coin manufacturing Matrix (printing), a mould for casting letters Matrix (protocol), an open standard for real-time communication Matrix (record production), or master, a disc | a eukaryotic organism's cells Matrix (chemical analysis), the non-analyte components of a sample Matrix (geology), the fine-grained material in which larger objects are 
embedded Hair matrix, produces hair Nail matrix, part of the nail in anatomy Arts and entertainment Fictional entities Matrix (comics), two comic book characters Matrix (Doctor Who), a computer system on the planet Gallifrey Matrix, a character from the Canadian animated TV series ReBoot Matrix (Neuromancer), a virtual-reality dataspace from the novel John Matrix, hero of the 1985 film Commando Irving Joshua Matrix, a fictitious creation of Martin Gardner Film and television Matrix (TV series), a 1993 Canadian fantasy series Matrix (talk show), a 2005–2012 Italian news and talk show Matrix of Leadership, in the Transformers franchise Games The Matrix: Path of Neo, a 2005 action-adventure video game The Matrix Online, a 2005 online multiplayer video game Literature Matrix (Perry and Tucker novel), a 1998 Doctor Who novel by Robert Perry and Mike Tucker Matrix (Groff novel), a 2021 novel by Lauren Groff |
The Washington Post wrote about him, "Suppose a maniac got hold of a talk show. Or need we suppose?" David Letterman said, "I'm always amazed at what people will fall for. We see this every ten or twelve years, an attempt at this, and I guess from that standpoint I don't quite understand why everybody's falling over backwards over the guy." Celebrity, cancellation, and bankruptcy The success of the show made Downey a pop culture celebrity, leading to appearances on Saturday Night Live in 1988, WrestleMania V in 1989 in which he traded insults with Roddy Piper and Brother Love on Piper's Pit, and later roles in movies such as Predator 2 and Revenge of the Nerds III: The Next Generation. He was also cast in several television roles, often playing tabloid TV hosts or other obnoxious media types. Downey notably starred in the Tales from the Crypt episode "Television Terror" which utilized several scenes shot by characters within the story, a format which became popular in horror films a decade later with the found footage genre. In 1989, Downey released an album of songs based on his show entitled Morton Downey Jr. Sings. The album's single, "Zip It!" (a catch-phrase from the TV show, used to quiet an irate guest), became a surprise hit on some college radio stations. Over the course of the 1988–89 television season, his TV show suffered a decline in viewership, resulting from many markets downgrading its time slot; even flagship station WWOR moved Downey's program from its original 9:00 PM slot to 11:30 PM in the fall of 1988. Beginning in January 1989, the time slot immediately following Downey's program was given to the then-new Arsenio Hall Show. Following Hall's strong early ratings, however, the two series swapped time slots several weeks later, thus relegating Downey to 12:30 AM in the number-one television market. In late April 1989, he was involved in an incident in a San Francisco International Airport restroom in which he claimed to have been attacked by neo-Nazis who painted a swastika on his face and attempted to shave his head.
Some inconsistencies in Downey's account (e.g., the swastika was painted in reverse, suggesting that Downey had drawn it himself in a mirror), and the failure of the police to find supportive evidence, led many to suspect the incident was a hoax and a ploy for attention. In July 1989, his show was canceled, with the owners of the show announcing that the last episode had been taped on June 30, and that no new shows would air after September 15, 1989. At the time of its cancellation, the show was airing on a total of 70 stations across the country, and its advertisers had been reduced primarily to "direct-response" ads (such as 900 chat line and phone sex numbers). In February 1990, Downey filed for bankruptcy in the US Bankruptcy Court for the District of New Jersey. Later career In 1990, Downey resurfaced on CNBC with an interview program called Showdown, which was followed by three attempted talk radio comebacks: first in 1992 on Washington, D.C. radio station WWRC; then in 1993 on Dallas radio station KGBS, where he would scream insults at his callers. He was also hired as the station's VP of Operations. The following year, he returned to CNBC with a short-lived television show, Downey; in one episode, Downey claimed to have had a psychic communication with O.J. Simpson's murdered ex-wife, Nicole Brown Simpson. His third – and final – attempt at a talk radio comeback occurred in 1997 on Cleveland radio station WTAM in a late evening time slot. It marked his return to the Cleveland market, where Downey had been a host for crosstown radio station WERE in the early 1980s prior to joining KFBK. This stint came shortly after the surgery for lung cancer that removed one of his lungs. At WTAM, Downey abandoned the confrontational schtick of his TV and previous radio shows, and conducted this program in a much more conversational and jovial manner. On August 30, 1997, Downey quit his WTAM show to focus on pursuing legal action against Howard Stern. 
Downey had accused Stern of spreading rumors that he had resumed his smoking habit, to which publicist Les Schecter retorted, "He hasn't picked up a cigarette." His replacement was former WERE host Rick Gilmour. Following his death, news reports and obituaries incorrectly (according to the Orange County Register) credited him as the composer of "Wipe Out." As of 2008, Downey's official website (and others) continue to make this claim. Prior to Downey's death, Spin in April 1989 had identified the Wipe Out authorship as a myth. Controversies In 1984, at KFBK radio, Downey used the word "Chinaman" while telling a joke. His use of the word upset portions of the sizable Asian community in Sacramento. One Asian-American city councilman called for an apology and pressured the station for Downey's resignation. Downey refused to apologize and was forced to resign. Downey was sued for allegedly appropriating the words and music to his theme song from two songwriters. He was sued for $40 million after bringing then-stripper Kellie Everts onto the show and calling her a "slut", a "pig", a "hooker", and a "tramp", claiming that she had venereal diseases, and banging his pelvis against hers. In April 1988, he was arraigned on criminal charges for allegedly attacking a gay guest on his show, in a never-aired segment. In another lawsuit, he was accused of slandering a newscaster (a former colleague), and of indecently exposing himself to her and slapping her. Downey punched Stuttering John during an interview done for The Howard Stern Show, while also shouting verbal insults at John, referring to him as an "uneducated slob". The situation then began to evolve into a brawl between the two until Downey had to be pulled off of John by security; the entire incident was caught on camera. 
When an Inside Edition camera crew approached Downey in 1989 to question him about his involvement in an alleged | talk show host and actor who pioneered the "trash TV" format in the late-1980s on his program The Morton Downey Jr. Show. Early life Downey's parents were in show business; his father, Morton Downey, was a popular singer, and his mother, Barbara Bennett, was a stage and film actress and singer and dancer. Downey did not use his legal first name (Sean) in his stage name. His aunts included Hollywood film stars Constance and Joan Bennett, from whom he was estranged, and his maternal grandfather was the celebrated matinée idol Richard Bennett. Born into a wealthy family, he was raised during the summers next door to the Kennedy compound in Hyannis Port, Massachusetts. Downey attended New York University. Career He was a program director and announcer at radio station WPOP in Hartford, Connecticut in the 1950s. He went on to work as a disc jockey, sometimes using the moniker "Doc" Downey, in various markets around the U.S., including Phoenix (KRIZ), Miami (WFUN), Kansas City (KUDL), San Diego (KDEO) and Seattle (KJR). He had to resign from WFUN after drawing ire from the FCC for announcing a competing disc jockey's home phone number on the air and insulting his wife. Like his father, Downey pursued a career in music, recording in both pop and country styles. He sang on a few records and then began to write songs, several of which were popular in the 1950s and 1960s. He joined ASCAP as a result. In 1958, he recorded "Boulevard of Broken Dreams", which he sang on national television on a set that resembled a dark street with one street light. In 1981, "Green Eyed Girl" charted on the Billboard Magazine country chart, peaking at No. 95. In the 1980s, Downey was a talk show host at KFBK-AM in Sacramento, California, where he employed his abrasive style. He was fired in 1984, and was subsequently replaced by Rush Limbaugh. 
Downey also had a stint on WMAQ-AM in Chicago where he unsuccessfully tried to get other on-air radio personalities to submit to drug testing. Downey's largest effect on American culture came from his popular, yet short-lived, syndicated late 1980s television talk show, The Morton Downey Jr. Show. Pro-life activism On January 22, 1980, Downey, a devoted pro-life activist, hosted the California State Rally for Life at the invitation of the California ProLife Council and United Students for Life. At that time, he was also running for President of the United States, as a Democrat. The United Students for Life, at California State University, Sacramento helped organize his California presidential rallies. Downey worked to help promote anti-abortion candidates in California and around the country. Television Downey headed to Secaucus, New Jersey, where his highly controversial television program The Morton Downey Jr. Show was taped. Starting as a local program on New York-New Jersey superstation WWOR-TV in October 1987, it expanded into national syndication in early 1988. The program featured screaming matches among Downey, his guests, and audience members. Using a large silver bowl for an ashtray, he would chain-smoke during the show and blow smoke in his guests' faces. Downey's fans became known as "Loudmouths", patterned after the studio lecterns decorated with gaping cartoon mouths, from which Downey's guests would go head-to-head against each other on their respective issues. Downey's signature phrases "pablum puking liberal" (in reference to left-liberals) and "zip it!" briefly enjoyed some popularity in the contemporary vernacular. He particularly enjoyed making his guests angry with each other, which on a few occasions resulted in physical confrontations. One such incident occurred on a 1988 show taped at the Apollo Theater, involving Al Sharpton and CORE National Chairman Roy Innis.
The exchange between the two men culminated in Innis shoving Sharpton into his chair, knocking him to the floor, and Downey intervening to separate the pair. Downey briefly took his show on the road in 1989, holding concert-like events across the country. Because of the controversial format and content of the show, distributor MCA Television had problems selling the show to a number of stations and advertisers. Even Downey's affiliates, many of which were low-rated independent television stations in small to medium markets, were so fearful of advertiser and viewer backlash that they would air one or even two local disclaimers during the broadcast. During one controversial episode Downey introduced his gay brother, Tony Downey, to his studio audience and informed them Tony was HIV positive. During the episode Downey stated he was afraid his audience would abandon him if they knew he had a gay brother, but then said he did not care.
Winner of 1 Grand Slam title → 2009 U.S. Open champion • 2009 Tour finals finalist • 2016 Olympics silver medalist • ranking in 2018 |- |||1999||–|| Australia|| || 18 || 2020 US Open quarterfinalist • ranking in 2019 |- |||1950||–|| Australia|| || 17 || 1974 Australian Open finalist • ranking in 1977 |- |||1981||–|| United States|| || 21 || Ranking in 2005 |- |||1956||–|| United States|| || 12 || 1981 and 1982 Australian Open finalist • ranking in 1983 |- |||1917||2002|| France|| || || 1937 French Championships semifinalist |- |||1972||–|| Belgium|| || || 1997 French Open semifinalist |- |||1944||–|| Australia|| || || 1979 Australian Open semifinalist |- |||1951||–|| United States|| || 5 || 1975 and 1976 French Open semifinalist • ranking in 1978 |- |||1959||–|| United States|| || || 1983 U.S. Open quarterfinalist |- |||1942||–|| South Africa|| || || 1965 Wimbledon quarterfinalist |- |||1991||–|| Bulgaria|| || 3 || 2014 Wimbledon semifinalist • 2017 Australian Open semifinalist • 2019 US Open semifinalist • ranking in 2017 • 2017 Tour Finals champion |- |||1873||1939|| Great Britain|| || || 1901, 1911 Wimbledon finalist |- |||1979||–|| France|| || || 2000 Olympic bronze medalist |- |||1987||–|| Serbia|| || 1|| Winner of 20 Grand Slam titles including a double Career Grand Slam in 2016 and 2021 → 2008/2011/2012/2013/2015/2016/2019/2020/2021 Australian Open champion (9) • 2011/2014/2015/2018/2019/2021 Wimbledon champion (6) • 2011/2015/2018 U.S. Open champion (3) • 2016/2021 French Open champion (2) • 2008/12/13/14/15 Tour finals champion (5) • 2008 Olympic bronze medalist • ranking world no. 
1 at a record of 329 weeks (2011-2021) |- |||1908||1978|| United States|| 1962 || || 1930 United States champion |- |||1875||1919|| Great Britain|| 1980 || || Winner of 6 Grand Slam titles and 1 Olympic gold medal → 1902, 1903, 1904, 1905 and 1906 Wimbledon champion • 1903 United States champion • 1900 Olympic gold medalist |- |||1872||1910|| Great Britain|| 1980 || || Winner of 4 Grand Slam titles → 1897, 1898, 1899 and 1900 Wimbledon champion • 1902 United States runner-up |- |||1988||–|| Ukraine|| || 13 || 2011 Australian Open quarterfinalist • ranking in 2012 |- |||1926||2006|| United States|| || || 1950 French Championships quarterfinalist |- |||1970||–|| Czech Republic|| || || 1999 U.S. Open quarterfinalist |- |||1879||1961|| Australia|| || || 1913 Wimbledon finalist |- |||1975||–|| Germany|| || || 1994 French Open quarterfinalist |- |||1958||2013|| Australia|| || || 1975 Australian Open quarterfinalist |- |||1921||2001||// Czechoslovakia/Egypt|| 1983 || || Winner of 3 Grand Slam titles → 1951 and 1952 French champion • 1954 Wimbledon champion |- |||1941||–|| South Africa|| 2013 || 13 || 1968 U.S. Open quarterfinalist • 1969 Wimbledon quarterfinalist • 1971 Australian Open quarterfinalist • ranking in 1974 |- |||1952||–|| Great Britain|| || || 1977 (December) Australian Open quarterfinalist |- |||1954||–|| United States|| || 14 || 1979 Wimbledon semifinalist • ranking in 1980 |- |||1852||1917|| United States|| 1955 || || 1883 U.S. Championship finalist |- id=E |||1867||1920|| Great Britain|| || || 1895, 1896, 1897 Wimbledon finalist • 1897(Ch) U.S. Championships finalist • 1908 Olympic bronze medalist |- |||1966||–|| Sweden|| 2004 || 1|| Winner of 6 Grand Slam titles → 1985 and 1987 Australian Open champion • 1988 and 1990 Wimbledon champion • 1991 and 1992 U.S. Open champion • 1989 Masters Grand Prix champion • ranking no. 
1 for 72 weeks → 21 weeks in 1990, 40 in 1991 and 11 in 1992 |- |||1954||–|| Australia|| || 15 || Winner of 1 Grand Slam title → 1976 Australian Open champion • ranking in 1982 |- |||1995||–|| Great Britain|| || 14 || 2018 Australian Open semifinalist • ranking in 2018 |- |||1971||–|| Morocco|| || 14 || 2000 and 2003 Australian Open quarterfinalist • 2002 and 2003 U.S. Open quarterfinalist • ranking in 2003 |- |||1947||–|| Egypt|| || || 1974 Wimbledon quarterfinalist |- |||1970||–|| Netherlands|| || 19 || 1995 Australian Open quarterfinalist, 1995 Wimbledon quarterfinalist • ranking in 1995. |- |||1936||–|| Australia|| 1982 || 1|| Winner of 12 Grand Slam titles → 1961, 1963, 1965, 1966 and 1967 Australian champion • 1961 and 1964 United States champion • 1963 and 1967 French champion • 1964 and 1965 Wimbledon champion • ranking world no. 1 amateur for two years, 1964 and 1965 |- |||1974||–|| Sweden|| || 4 || 1999 Australian Open finalist, 1996 quarterfinalist • 2001 Wimbledon quarterfinalist • ranking in 199 |- |||1976||–|| France|| || 17 || 1998 Australian Open semifinalist • ranking in 2000 |- |||1857||1916|| Great Britain || || || 1878 Wimbledon All Comers finalist |- |||1990||–|| Great Britain|| || 25 || Ranking in 2021 |- |||1961||–|| New Zealand|| || || 1987 Australian Open quarterfinalist |- id=F |||1948||–|| New Zealand|| || 24 || Ranking in 1973 |- |||1926||–|| United States|| 1974 || || Winner of 1 Grand Slam title → 1948 Wimbledon champion |- |||1981||–|| Switzerland|| || 1 || Winner of 20 Grand Slam titles and a career Grand Slam completed in 2009 ◌ 2003/2004/2005/2006/2007/2009/2012/2017 Wimbledon champion (8) • 2004/2006/2007/2010/2017/2018 Australian Open champion (6) • 2004/2005/2006/2007/2008 U.S. Open champion (5) • 2009 French Open champion • 2003/2004/2006/2007/2010/2011 Tour Finals champion (6) • 2012 Olympics silver medalist ◌ Ranking: world no.
1 for 310 weeks (2004-2018) of which 237 consecutive (also a record) (2004-2008) |- |||1951||–|| Austria|| || || 1978 Australian Open quarterfinalist |- |||1971||–||/ South Africa|| || 6 || 1992 and 2003 Australian Open semifinalist • 1992 Olympic silver medalist • ranking in 1995 |- |||1982||–|| Spain|| || 3 || 2013 French Open finalist • 2007 Tour Finals finalist • ranking in 2013 |- |||1980||–|| Spain|| || 1|| Winner of 1 Grand Slam title → 2003 French Open champion • 2002 Tour Finals finalist, 2001 semifinalist • ranking world no. 1 for 8 weeks, in 2003 |- |||1952||–|| Poland|| || 10 || 1977 and 1980 French Open quarterfinalist • 1980 Wimbledon quarterfinalist • 1980 U.S. Open quarterfinalist • ranking in 1977 |- |||1967||–|| Uruguay|| || || 1999 French Open quarterfinalist |- |||1946||–|| Chile|| || 14 || 1975 U.S. Open quarterfinalist • ranking in 1974 |- |||1981||–|| United States|| || 7 || 2007 Australian Open quarterfinalist • 2008 U.S. Open quarterfinalist • 2011 Wimbledon quarterfinalist • 2004 Olympic single silver medalist • ranking in 2011 |- |||1960||–|| Australia|| || 25 || Ranking in 1988 |- |||1928||1980|| United States|| || 5 || 1950 United States finalist • 1957 French finalist • ranking in 1957 |- |||1955||–|| United States|| || 8 || 1980 Wimbledon quarterfinalist • ranking in 1980 |- |||1987||–|| Italy|| || 9 || 2011 French Open quarterfinalist • ranking in 2019 |- |||1934||2020|| South Africa|| || || 1962 United States quarterfinalist |- |||1965||–|| France|| || 4 || 1991 and 1993 Australian Open quarterfinalist • 1991, 1992 and 1994 Wimbledon quarterfinalist • ranking in 1991 |- |||1947||–|| Yugoslavia|| || 8 || 1970 French Open finalist • ranking in 1991 |- |||1933||–|| Australia|| 1984 || 1 || Winner of 3 Grand Slam titles → 1959 and 1960 United States champion • 1960 Wimbledon champion • amateur No.
1 ranking, 1959 and 1960 |- |||1952||–|| Australia|| || || 1979 Australian Open quarterfinalist |- |||1997||–|| United States|| || 24 || Ranking in 2020 |- |||1942||2020|| United States|| || || 1971 French Open semifinalist |- |||1884||1962|| Germany|| || || 1914 Wimbledon finalist • 1908 Olympic silver medalist |- |||1970||–|| Australia|| || 24 || Ranking in 1990 |- |||1970||–|| Italy|| || 19 || 1995 French Open quarterfinalist • ranking in 1996 |- id=G |||1977||–|| United States|| || 14 || 2004 Wimbledon quarterfinalist • ranking in 2001 |- |||1983||–|| Spain|| || 23 || Ranking in 2011 |- |||1996||–|| Chile|| || 18 || Ranking in 2020 |- |||1898||1971|| United States|| || || 1919, 1920 Wimbledon semifinalist |- |||1986||–|| France|| || 7 || 2007 and 2015 Wimbledon semifinalist • 2013 US Open semifinalist • ranking in 2007 |- |||1973||–|| Italy|| || 18 || Ranking in 1995 |- |||1978||–|| Argentina|| || 5 || Winner of 1 Grand Slam title → 2004 French Open champion • 2005 Tour Finals semifinalist • ranking in 2005 |- |||1954||1994|| United States|| || 3 || Winner of 1 Grand Slam title → 1977(December) Australian Open champion • 1979 and 1981 Masters Grand Prix finalist • ranking in 1978 |- |||1934||–|| United States|| || || 1955 U.S. Championships quarterfinalist |- |||1963||–|| United States|| || || 1982 Australian Open quarterfinalist |- |||1949||–|| Australia|| || 16 || Ranking in 1974 ◌ 1977(December) Australian Open semifinalist |- |||1982||–|| United States|| || 15 || Ranking in 2005 ◌ 2005 U.S. Open semifinalist |- |||1938||2019|| Spain|| 2009 || 10 || Winner of 1 Grand Slam title → 1972 French champion • ranking amateur in 1969 |- |||1942||–||/ Spain|| || || 1968 Australian Championships finalist - 1975 Masters Grand Prix champion, partnering Manuel Orantes |- |||1958||–|| United States|| || || 1982 Australian Open quarterfinalist |- |||1961||–|| United States|| || 4 || Ranking in 1990 ◌ 1987 U.S.
Open quarterfinalist • 1990 Wimbledon quarterfinalist • 1988 Olympics bronze medalist |- |||1956||–|| Chile|| || 12 || Ranking in 1980 ◌ 1978/1979/1980 French Open quarterfinalist |- |||1958||–|| Israel|| || 22 || Ranking in 1982 ◌ 1981 Australian Open quarterfinalist |- |||1860||1939|| United States|| || || 1881 U.S. Championships finalist |- |||1890||1951|| France|| || || 1912 Wimbledon finalist • 1912 Olympic gold medalist |- |||1990||–|| Belgium|| || 7 || 2016 French Open quarterfinalist • 2017 Australian Open quarterfinalist • 2019 Wimbledon quarterfinalist • 2017 Tour Finals finalist • ranking in 2017 |- |||1963||–|| United States|| || || 1989 Wimbledon quarterfinalist |- |||1973||2017|| France|| || 22 || Ranking in 1999 |- |||1960||–|| Ecuador|| || 4 || Winner of 1 Grand Slam title → 1990 French Open champion, 1984, 1986 and 1987 quarterfinalist • 1984 Wimbledon quarterfinalist • 1984 U.S. Open quarterfinalist • ranking in 1990 |- |||1928||1995|| United States|| 1968 || || Winner of 2 Grand Slam titles → 1948 and 1949 United States champion • 1968 French Open semifinalist • 1968 U.S. Open quarterfinalist • rated world no. 1 for 8 years, 1952, 1954, 1955, 1956, 1957, 1958, 1959 and (as co-no.1) 1960 |- |||1980||–|| Chile|| || 5 || 2007 Australian Open finalist • 2008 Olympic silver medalist, 2004 bronze medalist • ranking in 2007 |- |||1853||1909|| Ireland|| || || 1879 Wimbledon finalist |- |||1868||1928|| Great Britain|| 2006 || || Winner of 3 Grand Slam titles → 1900, 1901 and 1909 Wimbledon champion • 1908 Olympic gold medallist |- | |1850||1906|| Great Britain|| || || Winner of 1 Grand Slam title → 1877 Wimbledon champion, 1878 finalist |- |||1946||–|| United States|| || 8 || 1971 Wimbledon semifinalist • 1972 U.S. 
Open semifinalist • 1973 French Open semifinalist • ranking in 1973 |- |||1952||–|| United States|| || 3 || 1977 French Open finalist • ranking in 1977 |- |||1948||–|| France|| || || 1970 French Open semifinalist |- |||1943||–|| United States|| || 7 || 1967 United States finalist • ranking in 1968 |- |||1910||1986|| United States|| 1972 || 6 || 1935 U.S. Open semifinalist • ranking in 1937 |- |||1903||1959|| Great Britain|| || || Winner of 1 Grand Slam title → 1929 Australian champion |- |||1920||2006|| United States|| || || 1942, 1943, 1944, and 1945 U.S. National Championships quarterfinalist |- |||1860||1930|| Great Britain|| || || Finalist in 1884 Wimbledon Championships – Gentlemen's Singles |- |||1978||–|| France|| || 4 || 2001 Australian Open semifinalist • 2001 French Open semifinalist • 2003 and 2004 Wimbledon semifinalist • 2001 Tennis Masters Cup finalist • winner of 1 ATP Masters Series event • ranking in 2002 |- |||1988||–|| Latvia|| || 10 || 2014 French Open semifinalist • ranking in 2014 |- |||1951||1996|| United States|| || 15 || 1979 Wimbledon quarterfinalist • ranking in 1979 |- |||1951||–|| United States|| || || 1982 U.S. Open quarterfinalist |- |||1931||2000|| Hungary|| || || 1966 French finalist, 1971 quarterfinalist |- |||1962||–|| Sweden|| || 25 || 1989 Australian Open semifinalist • ranking in 1985 |- |||1959||–|| Switzerland|| || 22 || 1985 Wimbledon quarterfinalist • 1985 U.S. Open quarterfinalist • ranking in 1986 |- |||1967||–|| Sweden|| || 10 || 1994 Australian Open quarterfinalist • ranking in 1991 |- id=H |||1966||–|| Netherlands|| || 18 || 1991 U.S.
Open quarterfinalist • ranking in 1995 |- |||1978||–|| Germany|| || 2 || 2000 Olympic silver medalist • 1999/2002/2007 Australian Open semifinalist • 2009 Wimbledon semifinalist • ranking in 2002 |- |||1878||1937|| United States|| 1961 || || 1906 United States quarterfinalist |- |||1855||1946|| Great Britain|| || || Winner of 1 Grand Slam title → 1878 Wimbledon champion, 1879 runner-up |- |||1872||1932|| United States|| || || 1892 U.S. Championships semifinalist |- |||1867||1934|| United States|| || || 1891 U.S. Championships semifinalist |- |||1864||1943|| Ireland|| || || Winner of 1 Grand Slam title → 1890 Wimbledon champion, 1889 semifinalist |- |||1981||–|| Romania|| || || 2005 French Open quarterfinalist |- |||1915||1996|| Great Britain|| || || 1937 French Championships quarterfinalist, 1937 U.S. Championships quarterfinalist |- |||1961||–|| United States|| || || 1982 U.S. Open quarterfinalist |- |||1849||1935|| Great Britain|| || || Winner of 2 Grand Slam titles → 1879 and 1880 Wimbledon champion, 1881 runner-up |- |||1899||1990|| Australia|| || || Winner of 1 Grand Slam title → 1926 Australasian champion • 1928 French Championships semifinalist |- |||1841||1915|| Great Britain|| || || 1877 Wimbledon All-Comers semifinalist |- |||1915||1943||/ Germany|| || || Winner of 1 Grand Slam title → 1937 French champion • 1938, 1939 Wimbledon semifinalist |- |||1974||–|| Great Britain|| || 4 || 1998, 1999, 2001 and 2002 Wimbledon semifinalist • 2004 French Open semifinalist • 2004 U.S. Open semifinalist • ranking in 2002 |- |||1900||1981|| United States|| || 8 || Ranking in 1927 and 1928 |- |||1940||–|| Australia/ South Africa|| || 6 || 1960, 1962 and 1963 Australian semifinalist • ranking amateur, 1967 |- |||1981||–|| Australia|| || 1 || Winner of 2 Grand Slam titles → 2001 U.S. Open champion • 2002 Wimbledon champion • 2001/2002 Tour Finals champion • ranking no.
1 for 80 weeks |- |||1953||–|| Spain|| || 6 || 1982 and 1983 French Open semifinalist, 1977 and 1979 quarterfinalist - ranking in 1983 |- |||1964||–|| Switzerland|| || 22 || 1991 French Open quarterfinalist • ranking in 1985 |- |||1934||1994|| Australia|| 1980 || 1 || Winner of 4 Grand Slam titles → 1956 and 1957 Wimbledon champion • 1956 French champion • 1956 Australian champion • ranking world no. 1 amateur for 2 years, 1953, 1956. ranking world no. 1 pro 1959 Ampol points |- |||1870||1930|| United States|| || || 1891, 1905 U.S. Championships finalist • 1898 Wimbledon semifinalist |- |||1968||–|| Sweden|| || 17 || Ranking in 1993 |- |||1938||–|| United States|| || 7 || 1959 U.S. Championships semifinalist • 1961 French Championships quarterfinalist • ranking in 1960 |- |||1963||–|| United States|| || 22 || Ranking in 1985 |- |||1958||–|| United States|| || 17 || Ranking in 1982 |- |||1906||1985|| Australia|| 1978 || || 1930, 1931 and 1932 Australian Championships finalist |- |||1868||1945|| United States|| 1974 || || 1895 United States champion, 1896 finalist |- |||1978||–|| Slovakia|| || 12 || 1999 French Open semifinalist • ranking in 2004 |- |||1950||–|| Czechoslovakia|| || 25 || Ranking in 1974 |- |||1902||1997|| Great Britain|| || || 1931 French Championships semifinalist |- |||1998||–|| France|| || 25 || Ranking in 2021 |- |||1919||1945|| United States|| 1966 || || Winner of 1 Grand Slam title → 1943 U.S. champion |- |||1894||1981|| United States|| 1961 || || 1923 Wimbledon finalist • 1928 and 1929 United States finalist |- |||1997||–|| Poland|| || 11 || 2021 Wimbledon semifinalist • ranking in 2021 |- id=I |||1985||–|| United States|| || 8 || 2018 Wimbledon semifinalist • 2011/2018 U.S. Open quarterfinalist • ranking in 2018 |- |||1971||–|| Yugoslavia / Croatia|| || 2 || Winner of 1 Grand Slam title → 2001 Wimbledon champion • 1996 U.S. 
Open semifinalist • ranking in 1994 |- id=J |||1879||1977|| India|| || || 1925 French Championships semifinalist, 1925 Wimbledon quarterfinalist |- |||1964||–|| Argentina|| || 10 || 1985 French Open quarterfinalist • ranking in 1990 |- |||1942||–|| France|| || 20 || 1974 French Open semifinalist; 1966 French Championships semifinalist • ranking in 1974 |- |||1961||–|| Sweden|| || 5 || 1985 Wimbledon semifinalist • ranking in 1985 |- |||1982||–|| Sweden|| || 9 || 2004 U.S. Open semifinalist • ranking in 2005 |- |||1975||–|| Sweden|| || 7 || Winner of 1 Grand Slam title → 2002 Australian Open champion • 2005 Wimbledon semifinalist • 1998/2000 US Open quarterfinalist • ranking in 2002 |- |||1989||–|| United States|| || 21 || Ranking in 2016 • 2016 Olympics quarterfinalist |- |||1894||1946|| United States|| 1958 || 1 || Winner of 3 Grand Slam titles → 1915 and 1919 United States champion • 1923 Wimbledon champion (results incomplete as tournament drawsheets unavailable) • co-ranking world no. 1 for 1919 |- |||1939||–|| Yugoslavia|| || || 1968 French Open quarterfinalist |- id=K |||1974||–|| Russia|| 2019 || 1 || Winner of 2 Grand Slam titles and 1 Olympic gold medal → 1996 French Open champion • 1999 Australian Open champion • 2000 Olympic gold medalist • ranking no. 1 for 6 weeks, in 1999 |- |||1993||-|| Russia|| || 24 || 2021 Australian Open semifinalist • ranking in 2021 |- |||1968||–|| Germany|| || 22 || 1996 French Open quarterfinalist • 1996 US Open quarterfinalist • ranking in 1995 |- |||1979||–|| Croatia|| || 14 || 2009 Wimbledon quarterfinalist • ranking in 2008 |- |||1891||1937|| Hungary|| || || 1926, 1929 French Championships quarterfinalist • 1929 Wimbledon quarterfinalist |- |||1996||–|| Russia || || 8 || Ranking in 2019 • 2019 French Open quarterfinalist |- |||1977||–|| Germany|| || 4 || 2006 Australian Open semifinalist, 1998/2000 quarterfinalist • 1997 Wimbledon quarterfinalist • 2000 U.S. 
Open quarterfinalist • 1999 Tour Finals semifinalist • ranking in 2000 |- |||1899||1966|| United States|| || || 1926 Wimbledon finalist |- |||1888||1964|| Great Britain|| || || Winner of 1 Grand Slam title → 1919 Australian champion • 1919 Wimbledon finalist |- |||1911||1994|| South Africa|| || || 1934 U.S. Championships semifinalist • 1934 Wimbledon quarterfinalist |- |||1989||–|| Slovakia|| || 24 || Ranking in 2015 |- |||1863||1917|| United States|| || || 1885, 1890 U.S. Championships finalist |- |||1935||–|| Great Britain|| || || 1959 French quarterfinalist |- |||1945||–|| Brazil|| || 24 || 1969 French Open quarterfinalist • ranking in 1974 |- |||1946||–|| Czechoslovakia|| 1990 || || Winner of 3 Grand Slam titles → 1970 and 1971 French Open champion • 1973 Wimbledon champion |- |||1983||–|| Germany|| || 16 || 2012 Wimbledon quarterfinalist • ranking in 2012 |- |||1968||–|| Czechoslovakia / Czech Republic|| || 2 || Winner of 1 Grand Slam title → 1998 Australian Open champion • ranking in 1998 |- |||1977||–|| Austria|| || 20 || 2002 Australian Open quarterfinalist • ranking in 2000 |- |||1904||1979|| Czechoslovakia|| || || 1926 and 1927 Wimbledon quarterfinalist |- |||1895||1950|| Czechoslovakia|| 2006 || || Rated professional world no. 1 for four years, 1926, 1927, 1928 and 1929 |- |||1971||–|| Netherlands|| || 4 || Winner of 1 Grand Slam title → 1996 Wimbledon champion • ranking in 1999 |- |||1921||2009|| United States|| 1968 || || Winner of 3 Grand Slam titles → 1946 and 1947 United States champion • 1947 Wimbledon champion • rated world no. 1 for 5 years → 1947, 1948, 1949, 1950 and 1951 |- |||1887||1968|| Germany|| || || 1913 Wimbledon semifinalist • 1912 Olympic bronze medalist |- |||1967||–|| United States|| || 6 || 1989 U.S. Open semifinalist • 1995 Australian Open semifinalist • ranking in 1990 |- |||1958||–|| South Africa/ USA|| || 7 || Winner of 2 Grand Slam titles → 1981 and 1982 Australian Open champion • ranking in 1984 |- |||1937||–|| India|| || || 1960 and 1961 Wimbledon semifinalist |- |||1961||–|| India|| || 23 || 1981 and 1987 U.S. Open quarterfinalist • 1986 Wimbledon quarterfinalist • ranking in 1985 |- |||1954||–|| Australia|| || || 1978 Australian Open quarterfinalist |- |||1982||–|| Poland|| || || 2013 Wimbledon quarterfinalist |- |||1976||–|| Brazil|| 2012 || 1|| Winner of 3 Grand Slam titles → 1997/2000/2001 French Open champion • 2000 Tennis Masters Cup champion • ranking no.
1 for 43 weeks in 2000-2001 |- |||1974||–|| Slovakia|| || 6 || 1998 Australian Open semifinalist • ranking in 1998 |- |||1971||–|| Sweden|| || || 1992 French Open quarterfinalist |- |||1890||1968|| Japan|| || || 1918 U.S. Championships semifinalist • 1920 Olympics silver medalist |- |||1995||–|| Australia|| || 13 || 2015 Australian Open quarterfinalist • 2014 Wimbledon quarterfinalist • Ranking in 2016 |- id=L |||1904||1996|| France|| 1976 || 1|| Winner of 7 Grand Slam titles → 1925, 1927 and 1929 French champion, 1926 and 1928 finalist • 1925 and 1928 Wimbledon champion, 1924 finalist, 1927 semifinalist • 1926 and 1927 United States champion • rated world no. 1 for 2 years |- |||1990||–|| Serbia|| || 23 || Ranking in 2019 |- |||1976||–|| Ecuador|| || 6 || 1999 Australian Open semifinalist • ranking in 1999 |- |||1872||1926|| United States|| 1956 || || Winner of 7 Grand Slam titles → 1901, 1902, 1907, 1908, 1909, 1910 and 1911 United States champion, 1900 and 1903 finalist • rated world no. 1 for 5 years → 1901 and 1902 (co-rated), 1908, 1909 and 1910 |- |||1925||2012|| United States|| 1969 || || Winner of 1 Grand Slam title → 1950 United States champion, 1954 finalist |- |||1970||–|| Sweden|| || 10 || 1994 French Open semifinalist • ranking in 1995 |- |||1938||–|| Australia|| 1981 || || Winner of 11 Grand Slam titles → 1960 and 1962 Australian champion; 1969 Australian Open champion • 1962 French champion; 1969 French Open champion, 1968 finalist • 1961, 1962, 1968 and 1969 Wimbledon champion • 1962 United States champion; 1969 U.S. Open champion • 1970 Masters Grand Prix finalist • rated world no.
1 for 7 years → 1964 (co-rated), 1965, 1966, 1967, 1968, 1969 and 1970 (co-rated) |- |||1851||1925|| Great Britain|| 2006 || || Winner of 1 Grand Slam title → 1887 Wimbledon champion, 1880, 1884, 1885, 1886 and 1888 finalist, 1878, 1881 and 1882 All-Comers semifinalist |- |||1963||–|| France|| || 5 || 1988 French Open finalist • ranking in 1986 |- |||1907||1998|| Great Britain|| || || 1933 French Championships semifinalist |- |||1960||–|| Czechoslovakia/ United States|| 2001 || 1 || Winner of 8 Grand Slam titles → 1984, 1986 and 1987 French Open champion • 1985, 1986 and 1987 U.S. Open champion • 1989 and 1990 Australian Open champion • 1981, 1982, 1985, 1986 and 1987 Masters Grand Prix champion • ranking no. 1 for 270 weeks → 17 weeks in 1983, 15 in 1984, 17 in 1985, 52 in 1986, 52 in 1987, 37 in 1988, 48 in 1989 and 32 in 1990 |- |||1867||1930|| Great Britain|| || || 1886, 1888, 1892, 1894 Wimbledon finalist |- |||1957||–|| New Zealand|| || 19 || 1983 Wimbledon finalist • ranking in 1979 |- |||1979||–|| Croatia|| || 3 || 2006 French Open semifinalist • ranking in 2006 |- |||1980||–|| France|| || 21 || Ranking in 2011 |- |||1954||–|| Great Britain|| || || 1977 (December) Australian Open finalist |- |||1981||–|| Spain|| || 12 || 2005/2008/2011 Wimbledon quarterfinalist • 2015 US Open quarterfinalist • ranking in 2015 |- |||1906||1991|| United States|| 1964 || || 1931 United States finalist |- |||1884||1972|| Great Britain|| || || Winner of 1 Grand Slam title → 1915 Australian champion • 1911, 1923 Wimbledon semifinalist |- |||1983||–|| Chinese Taipei|| || || 2010 Wimbledon quarterfinalist |- |||1965||–|| Sweden|| || 25 || Ranking in 1987 |- |||1937||–|| Sweden|| || 3 || 1961, 1964 French Championships semifinalist • ranking in 1964 |- |||1949||–|| United States|| || 7 || 1970 Australian Open semifinalist • ranking in 1972 |- |||1886||1935|| Great Britain|| || || 1922 Wimbledon finalist, 1905 Australian semifinalist |- |||1906||1963|| Great Britain|| || || 1930,
1932 French Championships quarterfinalist |- id=M |||1935||2012|| United States|| || || 1959 Wimbledon semifinalist • 1959 Australian semifinalist |- |||1867||1905|| Great Britain|| || || Winner of 1 Grand Slam title → 1896 Wimbledon champion • 1900 Olympics silver medalist |- |||1916||2013|| United States|| 1973 || 9 || 1938 United States finalist • ranking in 1938 |- |||1980||–|| Belgium|| || 19 || 2002 Wimbledon semifinalist • ranking in 2002 |- |||1916||1960|| Switzerland|| || || 1936 French Championships quarterfinalist |- |||1965||–|| Israel|| || 18 || 1992 Australian Open quarterfinalist • ranking in 1987 |- |||1969||–|| Argentina|| || 8 || 1989 French Open quarterfinalist • ranking in 1989 |- |||1907||1978|| United States|| || || 1926, 1928, 1930, 1933, 1935 U.S. Championships quarterfinalist • 1930 Wimbledon quarterfinalist • 1933 French Championships quarterfinalist |- |||1988||–|| France|| || 22 || Ranking in 2018 |- |||1956||–|| United States|| || || 1981 U.S. Open quarterfinalist |- |||1974||–|| Spain|| || 10 || 1998 French Open semifinalist • ranking in 1998 |- |||1952||–|| Australia|| || || 1978 Australian Open finalist |- |||1849||1921|| Great Britain|| || || 1877 Wimbledon runner-up |- |||1956||–|| United States|| || || 1977 Wimbledon quarterfinalist |- |||1970||–|| United States|| || 4 || 1994 Australian Open finalist • 1999 U.S. Open finalist • ranking in 1999 |- |||1979||–|| Chile|| || 9 || Winner of 2 Olympic gold medals → 2004 Olympic singles and doubles gold medalist • ranking in 2004 |- |||1950||–|| Australia|| || || 1974 Australian Open quarterfinalist |- |||1963||–|| Australia|| || 15 || 1987 Australian Open semifinalist • 1993 U.S.
Open semifinalist • ranking in 1993 |- |||1982||–|| France|| || 12 || Ranking in 2008 |- |||1967||–|| Japan|| || || 1995 Wimbledon quarterfinalist |- |||1958||–|| West Germany|| || 24 || Ranking in 1986 |- |||1883||1941|| Great Britain|| || || 1909/1914/1920 Wimbledon semifinalist |- |||1983||–|| Germany|| || 18 || 2004/2012 Wimbledon quarterfinalist • ranking in 2011 |- |||1956||–|| United States|| || 4 || 1980 and 1982 Wimbledon quarterfinalist • 1982 and 1984 U.S. Open quarterfinalist • ranking in 1980 |- |||1987||–|| Argentina|| || 21 || Ranking in 2015 |- |||1952||–|| United States|| || 7 || 1973 Wimbledon semifinalist • ranking in 1982 |- |||1960||–|| United States|| || 7 || 1983 Australian Open semifinalist • 1982 Wimbledon semifinalist • ranking in 1988 |- |||1959||–|| United States|| 1999 || || Winner of 7 Grand Slam titles → 1979, 1980, 1981 and 1984 U.S. Open champion • 1981, 1983 and 1984 Wimbledon champion • 1978, 1983 and 1984 Masters Grand Prix champion • ranking no. 1 for 170 weeks → 4 weeks in 1980, 23 in 1981, 45 in 1982, 26 in 1983, 37 in 1984, 35 in 1985 |- |||1966||–|| USA|| || || 1991 Australian Open semifinalist |- |||1916||1978|| Australia|| || || Winner of 1 Grand Slam title → 1927 Australian champion |- |||1929||2007|| Australia|| 1999 || 3 || Winner of 1 Grand Slam title → 1952 Australian champion • ranking in 1952 |- |||1941||1986|| United States|| 1986 || 2 || Winner of 1 Grand Slam title → 1963 Wimbledon champion • ranking in 1963 |- |||1890||1957|| United States|| 1957 || 1 || Winner of 2 Grand Slam titles → 1912 and 1913 United States champion, 1911, 1914 and 1915 finalist • 1913 Wimbledon finalist (results likely incomplete as most drawsheets are unavailable) • rated world no.
1 for 1 year, 1914 |- |||1955||2019|| Australia|| || 7 || 1980 Australian Open semifinalist • ranking in 1983 |- |||1954||–|| Australia|| || 24 || Ranking in 1986 |- |||1918||1996|| United States|| 1965 || || Winner of 2 Grand Slam titles → 1939 French champion, 1940 United States champion |- |||1942||–|| South Africa|| 1992 || || 1972 US Open quarterfinalist |- |||1964||–|| Czechoslovakia|| || 4 || 1988 Olympic gold medalist • 1986 US Open finalist • 1989 Australian Open finalist • ranking in 1988 |- |||1974||–|| Soviet Union / Ukraine|| || 4 || 1999 French Open finalist • ranking in 1994 |- |||1996||–|| Russia || || 2 || 2020 Tour Finals champion • 2019 US Open finalist • 2021 Australian Open finalist • 2020 US Open semifinalist • ranking in 2021 |- |||1848||1928|| Great Britain|| || || 1889 U.S. Championships semifinalist • 1895 Wimbledon semifinalist |- |||1949||2014|| West Germany|| || 20 || Ranking in 1973 |- |||1971||–|| Brazil|| || 25 || 1999 French Open semifinalist • ranking in 1999 |- |||1981||–|| Austria|| || 8 || 2010 French Open semifinalist • ranking in 2011 |- |||1907||1987|| Czechoslovakia|| || || 1938 French Championships finalist |- |||1927||2019|| Italy|| || || 1955, 1956 French Championships semifinalist |- |||1944||–|| Soviet Union|| || || 1972 French Open semifinalist • 1972 Australian Open semifinalist |- |||1977||–|| Belarus|| || 18 || 2002 US Open quarterfinalist • ranking in 2003 |- |||1917||1986|| Yugoslavia|| || || 1938, 1946, 1949 French Championships quarterfinalist |- |||1984||–|| Argentina|| || 10 || Ranking in 2012 |- |||1986||–|| France|| || 6 || 2008 French Open semifinalist • 2016 US Open semifinalist • ranking in 2016 |- |||1980||–|| Spain|| || 22 || Ranking in 2010 |- |||1904||1976|| Australia|| || || Winner of 1 Grand Slam title → 1930 Australian champion |- |||1946||–|| South Africa|| || || 1977 US Open quarterfinalist |- |||1920||2006|| Argentina|| || || 1953, 1954 French Championships semifinalist |- |||1896||1961|| Italy|| || || 1930 French
Championships semifinalist |- |||1955||–|| Great Britain|| || 15 || Ranking in 1983 |- |||1976||–|| Spain|| || 1|| Winner of 1 Grand Slam title → 1998 French Open champion • 1998 Tour Finals finalist, 1997/2002 semifinalist • ranking world no. 1 for 2 weeks in 1999 |- |||1940||–|| Australia|| || || 1962 Wimbledon finalist |- |||1913||2016|| United States|| 1972 || 7 || 1952 U.S. finalist • ranking in 1952 |- |||1983||–|| Luxembourg|| || || 2008 U.S. Open quarterfinalist |- |||1987||–|| Great Britain|| || 1|| Winner of 3 Grand Slam titles → 2012 US Open champion • 2013 and 2016 Wimbledon champion • 2016 Tour Finals champion • winner of 2 Olympic gold medals → 2012 and 2016 Olympic gold medalist • ranking world no. 1 for 41 weeks (2016–17) |- |||1892||1970|| United States|| 1958 || || 1917/1918 U.S. champion |- |||1967||–|| Austria|| || 1 || Winner of 1 Grand Slam title → 1995 French Open champion • ranking world no. 1 for 6 weeks |- id=N |||1986||–|| Spain|| || 1 || Winner of 20 Grand Slam titles including a career Grand Slam achieved in 2010 and 2 Olympic gold medals → 2005/2006/2007/2008/2010/2011/2012/2013/2014/2017/2018/2019 French Open champion (12) • 2008/2010 Wimbledon champion (2) • 2010/2013/2017/2019 US Open champion (4) • 2009 Australian Open champion • 2010/2013 Tour Finals finalist, 2006/2007/2015 semifinalist • 2008 Olympic singles gold medalist • world no. 1 for 209 weeks (2008-2020) |- |||1982||–|| Argentina|| || 3 || 2002 Wimbledon finalist • 2005 Tour Finals champion, 2006 semifinalist • ranking in 2006 |- |||1946||–|| Romania|| 1991 || 1 || Winner of 2 Grand Slam titles → 1972 US Open champion • 1973 French Open champion • 1971/1972/1973/1975 Masters champion, 1974 finalist • ranking world no. 1 for 40 weeks and for 1973 |- |||1873||1949|| United States|| || || 1895, 1896 U.S.
Championships semifinalist |- |||1944||–|| Australia|| 1986 || 1|| Winner of 7 Grand Slam titles → 1967/1970/1971 Wimbledon champion • 1967/1973 US Open champion • 1973/1975 Australian Open champion • ranking world no. 1 |- |||1930||2011|| Denmark|| || || 1953/1955 Wimbledon finalist |- |||1981||–|| Finland|| || 13 || 2005 U.S. Open quarterfinalist • 2006 Wimbledon quarterfinalist • 2008 Australian Open quarterfinalist • ranking in 2006 |- |||1886||1932|| United States|| || || 1917 U.S. Championships finalist |- |||1873||1937|| Great Britain|| || || 1897 U.S. Championships finalist |- |||1989||–|| Japan|| || 4 || 2014 US Open finalist • 2012/2015/2016/2019 Australian Open quarterfinalist • 2015/2017/2019 French Open quarterfinalist • 2018/2019 Wimbledon quarterfinalist • 2014, 2016 Tour Finals semifinalist • 2016 Olympic bronze medalist • ranking in 2015 |- |||1960||–|| France|| 2005 || 3 || Winner of 1 Grand Slam title → 1983 French Open champion • ranking in 1986 |- |||1976||–|| Sweden|| || 2 || 2000 French Open finalist • Ranking in 2000 |- |Cameron Norrie||1996||–|| Great Britain|| || 12 || Ranking in 2021 |- |||1899||1956|| South Africa|| || || 1921 Wimbledon finalist |- |||1965||–|| Czechoslovakia / Czech Republic|| || || 1994 US Open semifinalist • 1987/1993 French Open quarterfinalist |- |||1975||–|| Czech Republic|| || 5 || 2002 Australian Open semifinalist • ranking in 2002 |- |||1910||1991|| / Germany|| 2006 || || |- |||1963||–|| Sweden|| || 7 || Ranking in 1986 |- id=O |||1944||–|| Netherlands|| || || 1968 US Open finalist, 1971 semifinalist • 1969 French Open semifinalist, 1973 quarterfinalist • 1971 Australian Open semifinalist, 1970 quarterfinalist • 1978 Wimbledon semifinalist, 1968/1969/1975/1979 quarterfinalist |- |||1936||2020|| Peru / United States|| 1987 || || Winner of 2 Grand Slam titles → 1959 Australian champion • 1959 Wimbledon champion • 1959 U.S.
finalist |- |||1997||–|| United States|| || 23 || Ranking in 2021 |- |||1949||–||/ Spain|| 2012 || || Winner of 1 Grand Slam title → 1975 US Open champion, 1976/1977 quarterfinalist • 1974 French Open finalist, 1972 semifinalist, 1976/1978 quarterfinalist • 1972 Wimbledon semifinalist • 1968 Australian Open quarterfinalist • 1976 Masters champion |- |||1945||–|| United States|| || || 1971 U.S. Open quarterfinalist |- |||1938||1969|| Mexico|| 1979 || 1|| Winner of 1 Grand Slam title → 1963 U.S. champion • ranking no. 1 in 1963 |- id=P |||1921||1986|| Australia|| || || Winner of 1 Grand Slam title → 1946 Australian champion, 1947 finalist • 1947 Wimbledon semifinalist, 1946 quarterfinalist |- |||1989||–|| France|| || 18 || Ranking in 2016 |- |||1912||1994|| Yugoslavia|| || || 1938 French Championships semifinalist |- |||1936||–|| Mexico|| || || 1965 U.S. Championships quarterfinalist |- |||1950||–|| Italy|| || 4 || Winner of 1 Grand Slam title → 1976 French Open champion • 1976 Davis Cup champion • ranking in 1976 |- |||1870||1952|| United States|| || || 1899 U.S. Championships finalist |- |||1881||1946|| Great Britain|| || || Winner of 1 Grand Slam title → 1912 Australian champion |- |||1916||1997|| United States|| 1966 || || Winner of 4 Grand Slam titles → 1944, 1945 U.S. champion, 1948, 1949 French champion • 1937 Wimbledon semifinalist |- |||1847||1928|| Great Britain|| || || 1879 Wimbledon All-Comers semifinalist |- |||1947||–|| New Zealand|| || || 1973 Australian Open finalist |- |||1944||–|| United States|| 2013 || || 1965 U.S. quarterfinalist • 1976 Wimbledon quarterfinalist |- |||1962||–|| United States|| || 18 || Ranking in 1987 |- |||1895||1967|| Australia|| 1989 || || Rated co-world no.
1 in 1919 with "Little Bill" Johnston |- |||1949||–|| Rhodesia|| || 24 || Ranking in 1974 |- |||1924||2021|| United States|| 1977 || 1|| Winner of 2 Grand Slam titles → 1950 French champion • 1950 Wimbledon champion • ranking in 1950 |- |||1974||–|| Romania|| || 13 || 2002 French Open quarterfinalist • ranking in 2004 |- |||1955||–|| Paraguay|| || 9 || Ranking in 1980 |- |||1879||1967|| United States|| 1966 || || 1915 United States semifinalist |- |||1990||–|| Argentina|| || 20 || Ranking in 2019 • 2019 Wimbledon quarterfinalist |- |||1917||1974|| France|| || || 1946 Wimbledon quarterfinalist |- |||1969||–|| Argentina|| || 13 || Ranking in 1988 |- |||1963||–|| Sweden|| || 10 || 1986 French Open finalist • 1990 Australian Open quarterfinalist • ranking in 1986 |- |||1909||1995|| Great Britain|| 1975 || 1|| Winner of 8 Grand Slam titles, including a Career Slam → 1933/1934/1936 U.S. champion • 1934/1935/1936 Wimbledon champion • 1934 Australian champion • 1935 French champion • rated world no. 
1 for 5 years |- |||1916||1984|| France|| 2016 || || Winner of 1 Grand Slam title → 1946 Wimbledon champion |- |||1953||–|| United States|| || 19 || 1978/1981/1982 Australian Open semifinalist • ranking in 1983 |- |||1976||–|| Australia|| || 8 || 1998 US Open finalist • 2003 Wimbledon finalist • ranking in 1999 |- |||1937||–|| Australia|| || || 1968 Australian Championships semifinalist |- |||1933||–|| Italy|| 1986 || 3 || Winner of 2 Grand Slam titles → 1959 and 1960 French champion • ranking in 1959 |- |||1939||–|| Yugoslavia|| || || 1973 French Open finalist |- |||1869||1942|| Ireland|| || || Winner of 2 Grand Slam titles → 1893, 1894 Wimbledon champion |- |||1963||–|| Czechoslovakia|| || 21 || Ranking in 1985 |- |||1954||–|| West Germany|| || 23 || Ranking in 1979 |- |||1969||–|| France|| || || 1993 US Open finalist • 1997 Wimbledon finalist • 1998 French Open semifinalist |- |||1947||–|| West Germany|| || || 1974 French Open quarterfinalist |- |||1976||–|| Germany|| || || 2000/2003 Wimbledon quarterfinalist |- |||1990||–|| Canada|| || 25 || 2015 Wimbledon quarterfinalist • ranking in 2014 |- |||1994||–|| France|| || 10 || 2016 Wimbledon quarterfinalist • 2016 US Open quarterfinalist • ranking in 2018 |- |||1964||–|| Croatia|| || 16 || 1991 Australian Open quarterfinalist • 1993 French Open quarterfinalist • ranking in 1991 |- |||1949||–|| France|| || 23 || Ranking in 1973 • 1973 French Open finalist |- |||1978||–|| Argentina|| || 9 || 2005 French Open finalist • ranking in 2005 |- |||1913||1985|| Yugoslavia|| || || 1938 French Championships semifinalist • 1938, 1939 Wimbledon semifinalist |- |||1959||–|| United States|| || 21 || Ranking in 1980 |- id=Q |||1987||–|| United States|| || 11 || 2017 Wimbledon semifinalist • 2016 Wimbledon quarterfinalist • 2017 US Open quarterfinalist • Ranking in 2018 |- |||1913||1991|| Australia|| 1984 || || Winner of 3 Grand Slam titles → 1936/1940/1948 Australian champion |- id=R |||1972||–|| Australia|| 2006 || 1|| Winner
of 2 Grand Slam titles → 1997/1998 U.S. Open champion • 2000/2001 Wimbledon finalist, 1999 semifinalist • 1997 French Open semifinalist • 2001 Australian Open semifinalist • ranking world no. 1 for 1 week |- |||1942||2020|| United States|| 1987 || 5 || 1966 Wimbledon finalist • ranking in 1966 |- |||1953||–|| Mexico|| || 4 || Ranking in 1976 |- |||1988||–|| Spain|| || 17 || Ranking in 2017 • 2016 French Open quarterfinalist |- |||1990||–|| Canada|| || 3 || Ranking in 2016 • 2016 Wimbledon finalist • 2016 Tour Finals semifinalist |- |||1895||1962|| South Africa|| || || 1924 Wimbledon semifinalist • 1920 Olympic gold medalist |- |||1965||–|| United States|| || 20 || Ranking in 1991 |- |||1958||–|| United States|| || || 1980 Australian Open quarterfinalist |- |||1861||1899|| Great Britain|| 1983 || || Winner of 1 Grand Slam title → 1888 Wimbledon champion |- |||1861||1904|| Great Britain|| 1983 || || Winner of 7 Grand Slam titles → 1881/1882/1883/1884/1885/1886/1889 Wimbledon champion |- |||c.1921||1992|| United States|| || || 1942 U.S. National Championships quarterfinalist |- |||1903||1959|| United States|| 1961 || || 1924 Olympic gold medalist |- |||1946||–|| United States|| || 16 || 1970/1972 US Open semifinalist • ranking in 1973 |- |||1918||1995|| United States|| 1967 || || Winner of 3 Grand Slam titles → 1939 Wimbledon champion, 1939, 1941 U.S. champion • ranked world no. 1 for 3 years |- |||1941||–|| United States|| || 11 || 1971 Australian Open quarterfinalist • 1971 US Open quarterfinalist • ranking in 1974 |- |||1975||–|| Chile|| || 1|| 1998 Australian Open finalist • ranking world no.
1 for 6 weeks in 1998 |- |||1877||1959|| Great Britain|| || || 1903(Ch), 1904(Ch), and 1906(Ch) Wimbledon finalist |- |||1870||1955|| Great Britain|| || || 1902, 1903, 1904, and 1909(Ch) Wimbledon finalist • 1908 Olympic outdoor gold medalist |- |||1982||–|| Spain|| || 5 || 2007 Australian Open quarterfinalist • 2003/2005/2007/2009/2013 French Open quarterfinalist • 2013 US Open quarterfinalist • ranking in 2006 |- |||1945||–|| Australia|| 1986 || 2 || 1966 French champion • ranking in 1969 |- |||1981||–|| Belgium|| || 24 || Ranking in 2005 |- |||1982||–|| United States|| 2017 || 1 || Winner of 1 Grand Slam title → 2003 US Open champion • 2003/2004/2007 Masters semifinalist • ranking world no. 1 for 13 weeks in 2003-2004 |- |||1957||–|| France|| || || 1983 French Open semifinalist |- |||1930||2017|| Australia|| 2001 || 3 || Winner of 2 Grand Slam titles → 1954 Australian champion • 1958 French champion • ranking in 1958 |- |||1934||–|| Australia|| 1980 || 1 || Winner of 8 Grand Slam titles → 1953/1955/1971(O)/1972(O) Australian (Open) champion • 1953/1968(O) French (Open) champion • 1956/1970(O) US (Open) champion • ranking world no. 1 in 1961, 1962 and 1963 |- |||1970||–|| Switzerland|| || 9 || 1992 Olympic gold medalist • 1996 French Open semifinalist • ranking in 1995 |- |||1965||–|| United States|| || 13 || 1988 US Open quarterfinalist • ranking in 1991 |- |||1997||–|| Russia|| || 7 || 2017/2020 U.S. Open quarterfinalist • 2020 French Open quarterfinalist • 2021 Australian Open quarterfinalist • Ranking in 2021 |- |||1946||–|| Australia|| || || 1969/1975 Australian Open semifinalist |- |||1973||–|| Great Britain|| || 4 || 1997 US Open finalist • ranking in 1997 |- |||1916||1977|| Argentina|| || || 1942 and 1945 U.S.
National Championships quarterfinalist |- |||1998||–|| Norway|| || 11 || Ranking in 2021 |- id=S |||1978||–|| Brazil|| || || 2002 Wimbledon quarterfinalist |- |||1956||–|| United States|| || 14 || Ranking in 1980 |- |||1980||–|| Russia|| 2016 || 1|| Winner of 2 Grand Slam titles → 2000 US Open champion • 2005 Australian Open champion • 2000/2004 Masters semifinalist • ranking world no. 1 for 9 weeks |- |||1971||–|| United States|| 2007 || 1 || Winner of 14 Grand Slam titles → 1990/1993/1995/1996/2002 US Open champion • 1993/1994/1995/1997/1998/1999/2000 Wimbledon champion • 1994/1997 Australian Open champion • 1991/1994/1996/1997/1999 Masters champion (record; shared with Ivan Lendl) • ranking world no. 1 for 286 weeks |- |||1965||–|| Spain|| || 7 || 1988 French Open quarterfinalist • 1988 U.S. Open quarterfinalist • ranking in 1990 |- |||1968||–|| Spain|| || 23 || 1991/1996 U.S. Open quarterfinalist
that between philosophy and empirical science. Some argue that philosophy is distinct from science in that its questions cannot be answered empirically, that is, by observation or experiment. Some analytical philosophers argue that all meaningful empirical questions are to be answered by science, not philosophy. However, some schools of contemporary philosophy such as the pragmatists and naturalistic epistemologists argue that philosophy should be linked to science and should be scientific in the broad sense of that term, "preferring to see philosophical reflection as continuous with the best practice of any field of intellectual enquiry". that between philosophy and religion. Some argue that philosophy is distinct from religion in that it allows no place for faith or revelation: that philosophy does not try to answer questions by appeal to revelation, myth or religious knowledge of any kind, but uses reason "without reference to sensible observation and experiments". However, philosophers and theologians such as Thomas Aquinas and Peter Damian have argued that philosophy is the "handmaiden of theology" (ancilla theologiae). Methods Philosophical method (or philosophical methodology) is the study of how to do philosophy. A common view among philosophers is that philosophy is distinguished by the ways that philosophers follow in addressing philosophical questions. There is not just one method that philosophers use to answer philosophical questions. Recently, some philosophers have cast doubt on intuition as a basic tool in philosophical inquiry, from Socrates up to contemporary philosophy of language. In Rethinking Intuition, various thinkers discard intuition as a valid source of knowledge and thereby call into question 'a priori' philosophy.
Experimental philosophy is a form of philosophical inquiry that makes at least partial use of empirical research—especially opinion polling—in order to address persistent philosophical questions. This is in contrast with the methods found in analytic philosophy, whereby some say a philosopher will sometimes begin by appealing to his or her intuitions on an issue and then form an argument with those intuitions as premises. However, disagreement about what experimental philosophy can accomplish is widespread and several philosophers have offered criticisms. One claim is that the empirical data gathered by experimental philosophers can have an indirect effect on philosophical questions by allowing for a better understanding of the underlying psychological processes which lead to philosophical intuitions. Some analytic philosophers like Timothy Williamson have rejected such a move against 'armchair' philosophy–i.e., philosophical inquiry that is undergirded by intuition–by construing 'intuition' (which they believe to be a misnomer) as merely referring to common cognitive faculties: If one is calling into question 'intuition', one is, they would say, harboring a skeptical attitude towards common cognitive faculties–a consequence that seems philosophically unappealing. For Williamson, instances of intuition are instances of our cognitive faculties processing counterfactuals (or subjunctive conditionals) that are specific to the thought experiment or example in question. Progress A prominent question in metaphilosophy is that of whether or not philosophical progress occurs and more so, whether such progress in philosophy is even possible. It has even been disputed, most notably by Ludwig Wittgenstein, whether genuine philosophical problems actually exist. The opposite has also been claimed, for example by Karl Popper, who held that such problems do exist, that they are solvable, and that he had actually found definite solutions to some of them. 
David Chalmers divides inquiry into philosophical progress in metaphilosophy into three questions. The Existence Question: is there progress in philosophy? The Comparison Question: is there as much progress in philosophy as in science? The Explanation Question: why isn't there more progress in philosophy? See also Antiphilosophy Metatheory Meta-knowledge Metaphysics Metapolitics Metasemantics Non-philosophy Unsolved problems in philosophy Theory of everything (philosophy) References Further reading Double R., (1996) Metaphilosophy and Free Will, Oxford University Press, USA, , Ducasse, C.J., (1941) Philosophy as a Science: Its Matter and Its Method Lazerowitz M., (1964) Studies in Metaphilosphy, London: Routledge Overgaard, S, Gilbert, P., Burwood, S. (2013) An Introduction to Metaphilosophy, Cambridge: Cambridge University Press Rescher N., (2006), Philosophical Dialectics, an Essay on Metaphilosophy, Albany: State University of New York Press Rescher, Nicholas (2001). Philosophical Reasoning. A Study in the Methodology of Philosophizing. Blackwell. Williamson T., (2007) The Philosophy of Philosophy, London: Blackwell Wittgenstein Ludwig, Tractatus Logico-Philosophicus, trans. David Pears and Brian McGuinness (1961), Routledge, hardcover: , 1974 paperback: , 2001 hardcover: , 2001 paperback: ; ** Philosophische Untersuchungen (1953) or Philosophical | philosophy Anti-philosophies Philosophy and assertion Philosophy and exposition Philosophy and style Philosophy as literature Literature as philosophy Philosophical beauty Philosophy as science Philosophy and related fields and activities Philosophy and argument Philosophy and wisdom Philosophy and metaphilosophy Philosophy and the folk Philosophy and 'primitive' life Philosophy and philosophers Philosophy and pedagogy Aims Some philosophers (e.g. 
existentialists, pragmatists) think philosophy is ultimately a practical discipline that should help us lead meaningful lives by showing us who we are, how we relate to the world around us and what we should do. Others (e.g. analytic philosophers) see philosophy as a technical, formal, and entirely theoretical discipline, with goals such as "the disinterested pursuit of knowledge for its own sake". Other proposed goals of philosophy include discovering the absolutely fundamental reason of everything it investigates, making explicit the nature and significance of ordinary and scientific beliefs, and unifying and transcending the insights given by science and religion. Still others have proposed that philosophy is a complex discipline with four or six different dimensions.

Boundaries
Defining philosophy and its boundaries is itself problematic; Nigel Warburton has called it "notoriously difficult". There is no straightforward definition, and most interesting definitions are controversial. While there is some agreement that philosophy involves general or fundamental topics, there is no clear agreement about a series of demarcation issues, including: that between first-order and second-order investigations. Some authors say that philosophical inquiry is second-order, having concepts, theories and presuppositions as its subject matter; that it is "thinking about thinking", of a "generally second-order character"; that philosophers study, rather than use, the concepts that structure our thinking. However, the Oxford Dictionary of Philosophy warns that "the borderline between such 'second-order' reflection, and ways of practicing the first-order discipline itself, is not always clear: philosophical problems may be tamed by the advance of a discipline, and the conduct of a discipline may be swayed by philosophical reflection". that between philosophy and empirical science.
Some argue that philosophy is distinct from science in that its questions cannot be answered empirically, that is, by observation or experiment. Some analytical philosophers argue that all meaningful empirical questions are to be answered by science, not philosophy. However, some schools of contemporary philosophy such as the pragmatists and naturalistic epistemologists argue that philosophy should be linked to science and should be scientific in the broad sense of that term, "preferring to see philosophical reflection as continuous with the best practice of any field of intellectual enquiry". that between philosophy and religion. Some argue that philosophy is distinct from religion in that it allows no place for faith or revelation: that philosophy does not try to answer questions by appeal to revelation, myth or religious knowledge of any kind, but uses reason "without reference to sensible observation and experiments". However, philosophers and theologians such as Thomas Aquinas and Peter Damian have argued that philosophy is the "handmaiden of theology" (ancilla theologiae).

Methods
Philosophical method (or philosophical methodology) is the study of how to do philosophy. A common view among philosophers is that philosophy is distinguished by the methods philosophers follow in addressing philosophical questions. There is not just one method that philosophers use to answer philosophical questions. Recently, some philosophers have cast doubt on intuition as a basic tool in philosophical inquiry, a tool relied on from Socrates up to contemporary philosophy of language. In Rethinking Intuition various thinkers discard intuition as a valid source of knowledge and thereby call into question 'a priori' philosophy. Experimental philosophy is a form of philosophical inquiry that makes at least partial use of empirical research—especially opinion polling—in order to address persistent philosophical questions.
This is in contrast with the methods found in analytic philosophy, whereby some say a philosopher will sometimes begin by appealing to his or her intuitions on an issue and then form an argument with those intuitions as premises. However, disagreement about what experimental philosophy can accomplish is widespread and several philosophers have offered criticisms. One claim is that the empirical data gathered by experimental philosophers can have an indirect effect on philosophical questions by allowing for a better understanding of the underlying psychological processes which lead to philosophical intuitions. Some analytic philosophers like Timothy Williamson have rejected such a move against 'armchair' philosophy–i.e., philosophical inquiry that is undergirded by intuition–by construing 'intuition' (which they believe to be a misnomer) as merely referring to common cognitive faculties: If one is calling into question 'intuition', one is, they would say, harboring a skeptical attitude towards common cognitive faculties–a consequence that seems philosophically unappealing. For Williamson, instances of intuition are instances of our cognitive faculties processing counterfactuals (or subjunctive conditionals) that are specific to the thought experiment or example in question. Progress A prominent question in metaphilosophy is that of whether or not philosophical progress occurs and more so, whether such progress in philosophy is even possible. It has even been disputed, most notably by Ludwig Wittgenstein, whether genuine philosophical problems actually exist. The opposite has also been claimed, for example by Karl Popper, who held that such problems do exist, that they are solvable, and that he had actually found definite solutions to some of them. David Chalmers divides inquiry into philosophical progress in metaphilosophy into three questions. The Existence Question: is there progress in philosophy? 
The Comparison Question: is there as much progress in philosophy as in science? The Explanation Question: why isn't there more progress in philosophy?

See also
Antiphilosophy
Metatheory
Meta-knowledge
Metaphysics
Metapolitics
Metasemantics
Non-philosophy
Unsolved problems in philosophy
Theory of everything (philosophy)

References

Further reading
Double, R. (1996) Metaphilosophy and Free Will, Oxford University Press, USA
Ducasse, C. J. (1941) Philosophy as a Science: Its Matter and Its Method
Lazerowitz, M. (1964) Studies in Metaphilosophy, London: Routledge
Overgaard, S., Gilbert, P., Burwood, S. (2013) An Introduction to Metaphilosophy, Cambridge: Cambridge University Press
Rescher, N. (2006) Philosophical Dialectics: An Essay on Metaphilosophy, Albany: State University of New York Press
Rescher, N. (2001) Philosophical Reasoning: A Study in the Methodology of Philosophizing, Blackwell
Williamson, T. (2007) The Philosophy of Philosophy, London: Blackwell
Wittgenstein, L., Tractatus Logico-Philosophicus, trans. David Pears and Brian McGuinness (1961), Routledge; Philosophische Untersuchungen (1953), published in English as Philosophical Investigations
Pressed archtops
The ideal for archtops has been solid pieces of wood carved into the right shape. However, another kind of archtop exists: one whose top is made of laminated wood, or of thin sheets of solid wood, pressed into the arched shape. These have become increasingly common in the world of internationally constructed musical instruments in the 21st century. The pressed-top instruments are made to appear the same as the carved-top instruments; however, they do not sound the same as the carved-wood tops. Carved-wood tops, when carved to the ideal thickness, produce the sound which consumers expect; carving them incorrectly can lead to a dull sound. The sound of a carved-wood instrument changes the longer it is played, and older instruments are sought out for their rich sound. Laminated-wood presstops are less resonant than carved wood, the wood and glue vibrating differently than wood grain. Presstops made of solid wood have the wood's natural grain compressed, creating a sound that is not as full as on a well-made, carved-top mandolin.

Flatback
Flatback mandolins use a thin sheet of wood with bracing for the back, as a guitar does, rather than the bowl of the bowlback or the arched back of the carved mandolins. Like the bowlback, the flatback has a round sound hole. This has sometimes been modified to an elongated hole, called a D-hole. The body has a rounded almond shape with a flat or sometimes canted soundboard. The type was developed in Europe in the 1850s. The French and Germans called it a Portuguese mandolin, although they also developed it locally. The Germans used it in the Wandervogel movement.
The bandolim is commonly used wherever the Spanish and Portuguese took it: in South America, in Brazil (Choro) and in the Philippines. In the early 1970s English luthier Stefan Sobell developed a large-bodied, flat-backed mandolin with a carved soundboard, based on his own cittern design; this is often called a 'Celtic' mandolin. American forms include the Army-Navy mandolin, the flatiron and the pancake mandolins.

Tone
The tone of the flatback is described as warm or mellow, suitable for folk music and smaller audiences. The instrument's sound does not punch through the other players' sound the way a carved top does.

Double top, double back
The double top is a feature that luthiers are experimenting with in the 21st century to get better sound. However, mandolinists and luthiers have been experimenting with it since at least the early 1900s, when mandolinist Ginislao Paris approached Luigi Embergher to build custom mandolins. The sticker inside one of the four surviving instruments indicates that the design was named after him, the Sistema Ginislao Paris. Paris' round-back double-top mandolins use a false back below the soundboard to create a second hollow space within the instrument. Modern mandolinists such as Joseph Brent and Avi Avital play customized instruments, modified either by the luthier's choice or at the request of the player. Joseph Brent's mandolin, made by Brian Dean, also uses what Brent calls a false back. Brent's mandolin was the luthier's solution to Brent's request for a loud mandolin in which the wood was clearly audible, with less metallic sound from the strings. The type used by Avital is a variation of the flatback, with a double top that encloses a resonating chamber, sound holes on the side, and a convex back. It is made by one manufacturer in Israel, luthier Arik Kerman. Other players of Kerman mandolins include Alon Sariel, Jacob Reuven, and Tom Cohen.
Others Mandolinetto Other American-made variants include the mandolinetto or Howe-Orme guitar-shaped mandolin (manufactured by the Elias Howe Company between 1897 and roughly 1920), which featured a cylindrical bulge along the top from fingerboard end to tailpiece and the Vega mando-lute (more commonly called a cylinder-back mandolin manufactured by the Vega Company between 1913 and roughly 1927), which had a similar longitudinal bulge but on the back rather than the front of the instrument. Mandolin-banjo An instrument with a mandolin neck paired with a banjo-style body was patented by Benjamin Bradbury of Brooklyn in 1882 and given the name banjolin by John Farris in 1885. Today banjolin is sometimes reserved to describe an instrument with four strings, while the version with the four courses of double strings is called a mandolin-banjo. Resonator mandolin A resonator mandolin or "resophonic mandolin" is a mandolin whose sound is produced by one or more metal cones (resonators) instead of the customary wooden soundboard (mandolin top/face). Historic brands include Dobro and National. Electric mandolin As with almost every other contemporary chordophone, another modern variant is the electric mandolin. These mandolins can have four or five individual or double courses of strings. They were developed in the early 1930s, contemporaneous with the development of the electric guitar. They come in solid body and acoustic electric forms. Specific instruments have been designed to overcome the mandolin's rapid decay with its plucked notes. Fender released a model in 1992 with an additional string (a high A, above the E string), a tremolo bridge and extra humbucker pickup (total of two). The result was an instrument capable of playing heavy metal style guitar riffs or violin-like passages with sustained notes that can be adjusted as with an electric guitar. 
Playing traditions worldwide The international repertoire of music for mandolin is almost unlimited, and musicians use it to play various types of music. This is especially true of violin music, since the mandolin has the same tuning as the violin. Following its invention and early development in Italy the mandolin spread throughout the European continent. The instrument was primarily used in a classical tradition with Mandolin orchestras, so-called Estudiantinas or in Germany Zupforchestern appearing in many cities. Following this continental popularity of the mandolin family local traditions appeared outside Europe in the Americas and in Japan. Travelling mandolin virtuosi like Carlo Curti, Giuseppe Pettine, Raffaele Calace and Silvio Ranieri contributed to the mandolin becoming a "fad" instrument in the early 20th century. This "mandolin craze" was fading by the 1930s, but just as this practice was falling into disuse, the mandolin found a new niche in American country, old-time music, bluegrass and folk music. More recently, the Baroque and Classical mandolin repertory and styles have benefited from the raised awareness of and interest in Early music, with media attention to classical players such as Israeli Avi Avital, Italian Carlo Aonzo and American Joseph Brent. In India, the mandolin is played in classical Carnatic music. The musician U. Srinivas was perhaps the greatest mandolin player in this style. Lauded across the world for his virtuosity with the instrument, he died young. Notable literature Art or "classical" music The tradition of so-called "classical music" for the mandolin has been somewhat spotty, due to its being widely perceived as a "folk" instrument. Significant composers did write music specifically for the mandolin, but few large works were composed for it by the most widely regarded composers. The total number of these works is rather small in comparison to—say—those composed for violin. 
One result of this dearth is that there have been few positions for mandolinists in regular orchestras. To fill this gap in the literature, mandolin orchestras have traditionally played many arrangements of music written for regular orchestras or other ensembles. Some players have sought out contemporary composers to solicit new works. Furthermore, of the works that have been written for mandolin from the 18th century onward, many have been lost or forgotten. Some of these await discovery in museums, libraries, and archives. One example of rediscovered 18th-century music for mandolin and ensembles with mandolins is the Gimo collection, collected in the first half of 1762 by Jean Lefebure. Lefebure collected the music in Italy, and it was forgotten until the manuscripts were rediscovered. Vivaldi wrote several concertos for mandolino and orchestra: one for four-course mandolino, string bass and continuo in C major (RV 425); one for two five-course mandolinos, bass strings and continuo in G major (RV 532); and a concerto for two mandolins, two violins "in tromba", two recorders, two salmoè, two theorbos, cello, strings and basso continuo in C major (P. 16). Beethoven composed mandolin music and enjoyed playing the mandolin. His four small pieces date from 1796: Sonatine WoO 43a; Adagio ma non troppo WoO 43b; Sonatine WoO 44a; and Andante con Variazioni WoO 44b. The opera Don Giovanni by Mozart (1787) includes mandolin parts, including the accompaniment to the famous aria Deh vieni alla finestra, and Verdi's opera Otello calls for guzla accompaniment in the aria Dove guardi splendono raggi, but the part is commonly performed on mandolin. Gustav Mahler used the mandolin in his Symphony No. 7, Symphony No. 8 and Das Lied von der Erde. Parts for mandolin are included in works by Schoenberg (Variations Op.
31), Stravinsky (Agon), Prokofiev (Romeo and Juliet) and Webern (Op. 10). Some 20th-century composers also used the mandolin as their instrument of choice (amongst these are Schoenberg, Webern, Stravinsky and Prokofiev). Among the most important European mandolin composers of the 20th century are Raffaele Calace (composer, performer and luthier) and Giuseppe Anedda (virtuoso concert mandolinist and professor of the first chair of the Conservatory of Italian Mandolin, Padua, 1975). Today, representatives of Italian classical music and Italian classical-contemporary music include Ugo Orlandi, Carlo Aonzo, Dorina Frati, Mauro Squillante and Duilio Galfetti. Japanese composers also produced orchestral music for mandolin in the 20th century, but these works are not well known outside Japan. Traditional mandolin orchestras remain especially popular in Japan and Germany, but also exist throughout the United States, Europe and the rest of the world. They perform works composed for mandolin family instruments, or re-orchestrations of traditional pieces. The structure of a contemporary traditional mandolin orchestra consists of: first and second mandolins, mandolas (either octave mandolas, tuned an octave below the mandolin, or tenor mandolas, tuned like the viola), mandocellos (tuned like the cello), and bass instruments (conventional string bass or, rarely, mandobasses). Smaller ensembles, such as quartets composed of two mandolins, mandola, and mandocello, may also be found.

Unaccompanied solo
Niccolò Paganini Minuet Silvio Ranieri Variations on a Theme by Haydn Song of summer Raffaele Calace Prelude No. 1 Prelude No. 2 Prelude No. 3 Prelude No. 5 Prelude No. 10 Prelude No. 11 Prelude No. 14 Prelude No. 15 Large prelude Collard Sylvia Minuet of rose Ugo Bottacchiarri I have stood on the banks Heinrich Koniettsuni Partita No. 1, etc. Herbert Baumann Sonatine, etc.
Siegfried Behrend Sense – structure John Craton The Gray Wolf Perpetuum Mobile Variations from Der Fluyten Lust-hof Sakutarō Hagiwara Hataoriru maiden Takei Shusei Spring to go Seiichi Suzuki Variations on Schubert lullaby City of Elm Variations on Kojonotsuki of subject matter Gilad Hochman Two Episodes for solo mandolin Jiro Nakano "Spring has come" Variations Prayer Fantasia second No. Serenata Beautiful my child and where Prayer of the evening Variations on September Affair of the subject matter Makino YukariTaka Spring snow of ballads Jo Kondo In early spring Takashi Kubota Nocturne Etude Fantasia first No. Yasuo Kuwahara Moon and mountain witch Impromptu Winter Light Mukyu motion Jon-gara Silent door Victor Kioulaphides Accompaniment with solo Ludwig van Beethoven Sonatine in C minor, WoO 43a Adagio in E major WoO 43b Sonatine in C major WoO 44a Andante and Variations in D major WoO 44b John Craton Dioces aztecas The Legend of Princess Noccalula Giovanni Hoffmann 4 Quartet for Mandolin, Violin, Viola, and Lute 4 Divertimenti for Mandolin, Violin & B.c. Johann Nepomuk Hummel Sonata in C major Op. 35 Vittorio Monti Csárdás Carlo Munier Spanish Capriccio Mazurka for concert Waltz for concert Bizaria Aria Varia data Mandolin Concerto No. 1 Raffaele Calace Mandolin Concerto No. 1 Mandolin Concerto No. 
2 Mukyu motion Tarantella Song of Nostalgia Elegy Mazurka for concert Silvio Ranieri Warsaw of memories Enrico Marcelli Gypsy style Capriccio Fantastic Waltz Mukyu motion Polonaise for concert Hans Gál Divertimento for mandolin and harp Such as a duo for the mandolin and guitar Norbert Shupuronguru Serenade for mandolin and guitar Franco Marugora Grand Sonata for mandolin and guitar Kurt Schwaen Slovenia wind Dances such as Dietrich Erdmann Sonatine Mari Takano Light of silence Rikuya Terashima Sonata for mandolin and piano (2002) Duo and musical ensemble A duet or duo is a musical composition for two performers in which the performers have equal importance to the piece. A musical ensemble with more than two solo instruments or voices is called trio, quartet, quintet, sextet, septet, octet, etc. Ella Von Adajewska-Schultz (1846-1926) Venezuelan Serenade Valentine Abt (1873-1942) In Venice Waters Charles Acton Chants Des Gondoliers Hermann Ambrosius Duo Emanuele Barbella Sonata in D major for Mandolin and Basso Continuo Ignazio Bitelli (c. 1880–1956) L'Albero di Natale, pastorale for mandolin & guitar Il Gondoliere, valse for 2 mandolins & guitar Costantino Bertucci Il Carnevale Di Venezia Con Variazioni Pietro Gaetano Boni (1686-1741) Sonate pour mandoline en la, Op. 2 n° 1 Sonate pour mandoline en ré mineur, Op. 2 n° 2 Sonate pour mandoline en ré, Op. 2 n° 9 Antonio Del Buono "In Gondola" Serenata Veneziana "Ai Mandolnisti Di Venezia Raffaele Calace Barcarola Op. 100 Per Chitarra Barcarola Op. 116 Per Liuto "A Mio Figlio Peppino" Gioacchino Cocchi Sinfonia for 2 Mandolins & Continuo, (Gimo 76) Jules Cottin Au Fil De L'Eau John Craton Charon Crossing the Styx (mandolin & double bass) Four Whimsies (mandolin & octave mandolin) Les gravures de Gustave Doré (mandolin & guitar) Six Pantomimes for Two Mandolins Sonatina No. 3 for Mandolin & Violin Hans Gál Op. 
59a Sonatina for 2 mandolins (1952) Giovanni Battista Gervasio Sonata for Mandolin & Continuo (Gimo 141) Sonata per camera (Gimo 143) Sinfonia for 2 Mandolins & Continuo (Gimo 149) Trio for 2 Mandolins & Continuo (Gimo 150) Sonata in D major for Mandolin and Basso Continuo Sonata in G major for Mandolin and Basso Continuo Giuseppe Giuliano Sonata in D major for Mandolin and Basso Continuo Geoffrey Gordon Interiors of a Courtyard (mandolin & guitar) Addiego Guerra Sonata in G major for Mandolin and Basso Continuo Positive Hattori Concerto for two mandolins and piano Sean Hickey Mandolin Canons (mandolin & guitar) Giovanni Hoffmann 3 Duets for Mandolin and Violin Serenade for Viola and Mandolin Tyler Kaier Den lille Havfrue (mandolin & guitar) Peter Machajdík Mit den Augen eines Falken for mandolin & guitar (2016) Giovanni Battista Maldura Barcarola Veneziana Di Mendelssohn Edward Mezzacapo (1832–1898) Le Chant Du Gondolier Heinrich Molbe (1835–1915) Gondolata Op. 74 Per Mandolino, Clarinetto E Pianoforte Carlo Munier (1859–1911) "In Gondola" Ricordi di Mendelssohn Notturno Veneziano

The relatively rare eight-string mandobass, or "tremolo-bass", also exists, with double courses like the rest of the mandolin family, and is tuned either G1–D2–A2–E3, two octaves lower than the mandolin, or C1–G1–D2–A2, two octaves below the mandola.

Variations

Bowlback
Bowlback mandolins (also known as roundbacks) are used worldwide. They are most commonly manufactured in Europe, where the long history of mandolin development has created local styles. However, Japanese luthiers also make them. Owing to the shape and to the common construction from wood strips of alternating colors, in the United States these are sometimes colloquially referred to as the "potato bug" or "potato beetle" mandolin.

Neapolitan and Roman styles
The Neapolitan style has an almond-shaped body resembling a bowl, constructed from curved strips of wood.
It usually has a bent sound table, canted in two planes, a design intended to take the tension of the eight metal strings arranged in four courses. A hardwood fingerboard sits on top of or is flush with the sound table. Very old instruments may use wooden tuning pegs, while newer instruments tend to use geared metal tuners. The bridge is a movable length of hardwood. A pickguard is glued below the sound hole under the strings. European roundbacks commonly use a shorter scale length than is common on archtop mandolins. Intertwined with the Neapolitan style is the Roman style mandolin, which has influenced it. The Roman mandolin had a fingerboard that was more curved and narrow. The fingerboard was lengthened over the sound hole for the E strings, the highest-pitched strings. The shape of the back of the neck was different, less rounded with an edge; the bridge was curved, making the G strings higher. The Roman mandolin had mechanical tuning gears before the Neapolitan.

Manufacturers of Neapolitan-style mandolins
Prominent Italian manufacturers include Vinaccia (Naples), Embergher (Rome) and Calace (Naples). Other modern manufacturers include Lorenzo Lippi (Milan), Hendrik van den Broek (Netherlands), Brian Dean (Canada), Salvatore Masiello and Michele Caiazza (La Bottega del Mandolino) and Gabriele Pandini (Ferrara). In the United States, when the bowlback was being made in numbers, Lyon and Healy was a major manufacturer, especially under the "Washburn" brand. Other American manufacturers include Martin, Vega, and Larson Brothers. In Canada, Brian Dean has manufactured instruments in Neapolitan, Roman, German and American styles but is also known for his original 'Grand Concert' design created for American virtuoso Joseph Brent. German manufacturers include Albert & Mueller, Dietrich, Klaus Knorr, Reinhold Seiffert and Alfred Woll. The German bowlbacks use a style developed by Seiffert, with a larger and rounder body. Japanese brands include Kunishima and Suzuki.
Other Japanese manufacturers include Oona, Kawada, Noguchi, Toichiro Ishikawa, Rokutaro Nakade, Otiai Tadao, Yoshihiko Takusari, Nokuti Makoto, Watanabe, Kanou Kadama and Ochiai. Other bowlback styles: Lombardic, Milanese, Cremonese, Brescian, Genoese Another family of bowlback mandolins came from Milan and Lombardy. These mandolins are closer to the mandolino or mandore than other modern mandolins. They are shorter and wider than the standard Neapolitan mandolin, with a shallow back. The instruments have 6 strings, 3 wire treble-strings and 3 gut or wire-wrapped-silk bass-strings. The strings ran between the tuning pegs and a bridge that was glued to the soundboard, as a guitar's. The Lombardic mandolins were tuned g–b–e′–a′–d″–g″ (shown in Helmholtz pitch notation). A developer of the Milanese style was Antonio Monzino (Milan) and his family who made them for 6 generations. Samuel Adelstein described the Lombardi mandolin in 1893 as wider and shorter than the Neapolitan mandolin, with a shallower back and a shorter and wider neck, with six single strings to the regular mandolin's set of 4. The Lombardi was tuned C–D–A–E–B–G. The strings were fastened to the bridge like a guitar's. There were 20 frets, covering three octaves, with an additional 5 notes. When Adelstein wrote, there were no nylon strings, and the gut and single strings "do not vibrate so clearly and sweetly as the double steel string of the Neapolitan." Brescian mandolin or Cremonese mandolin Brescian mandolins (also known as Cremonese) that have survived in museums have four gut strings instead of six and a fixed bridge. The mandolin was tuned in fifths, like the Neapolitan mandolin. In his 1805 mandolin method, Anweisung die Mandoline von selbst zu erlernen nebst einigen Uebungsstucken von Bortolazzi, Bartolomeo Bortolazzi popularised the Cremonese mandolin, which had four single-strings and a fixed bridge, to which the strings were attached. 
Bortolazzi said in this book that the new wire-strung mandolins were uncomfortable to play, when compared with the gut-string instruments. Also, he felt they had a "less pleasing...hard, zither-like tone" as compared to the gut string's "softer, full-singing tone." He favored the four single strings of the Cremonese instrument, which were tuned the same as the Neapolitan. Genoese mandolin, a blend of styles Like the Lombardy mandolin, the Genoese mandolin was not tuned in fifths. Its 6 gut strings (or 6 courses of strings) were tuned as a guitar but one octave higher: e-a-d’-g’-b natural-e”. Like the Neapolitan and unlike the Lombardy mandolin, the Genoese does not have the bridge glued to the soundboard, but holds the bridge on with downward tension, from strings that run between the bottom and neck of the instrument. The neck was wider than the Neapolitan mandolin's neck. The peg-head is similar to the guitar's. Archtop At the very end of the 19th century, a new style, with a carved top and back construction inspired by violin family instruments began to supplant the European-style bowl-back instruments in the United States. This new style is credited to mandolins designed and built by Orville Gibson, a Kalamazoo, Michigan, luthier who founded the "Gibson Mandolin-Guitar Manufacturing Co., Limited" in 1902. Gibson mandolins evolved into two basic styles: the Florentine or F-style, which has a decorative scroll near the neck, two points on the lower body and usually a scroll carved into the headstock; and the A-style, which is pear-shaped, has no points and usually has a simpler headstock. These styles generally have either two f-shaped soundholes like a violin (F-5 and A-5), or a single oval sound hole (F-4 and A-4 and lower models) directly under the strings. Much variation exists between makers working from these archetypes, and other variants have become increasingly common. 
Generally, in the United States, Gibson F-hole F-5 mandolins and mandolins influenced by that design are strongly associated with bluegrass, while the A-style is associated with other types of music, although it too is most often used for and associated with bluegrass. The F-5's more complicated woodwork also translates into a more expensive instrument. Internal bracing to support the top in the F-style mandolins is usually achieved with parallel tone bars, similar to the bass bar on a violin. Some makers instead employ "X-bracing", which is two tone bars mortised together to form an X. Some luthiers now use a "modified X-bracing" that incorporates both a tone bar and X-bracing. Numerous modern mandolin makers build instruments that largely replicate the Gibson F-5 Artist models built in the early 1920s under the supervision of Gibson acoustician Lloyd Loar. Original Loar-signed instruments are sought after and extremely valuable. Other makers from the Loar period and earlier include Lyon and Healy, Vega and Larson Brothers.
Presstops made of solid wood have the wood's natural grain compressed, creating a sound that is not as full as on a well-made, carved-top mandolin. Flatback Flatback mandolins use a thin sheet of wood with bracing for the back, as a guitar uses, rather than the bowl of the bowlback or the arched back of the carved mandolins. Like the bowlback, the flatback has a round sound hole. This has sometimes been modified to an elongated hole, called a D-hole. The body has a rounded almond shape with a flat or sometimes canted soundboard. The type was developed in Europe in the 1850s. The French and Germans called it a Portuguese mandolin, although they also developed it locally. The Germans used it in Wandervogel. The bandolim is commonly used wherever the Spanish and Portuguese took it: in South America, in Brazil (Choro) and in the Philippines. In the early 1970s, English luthier Stefan Sobell developed a large-bodied, flat-backed mandolin with a carved soundboard, based on his own cittern design; this is often called a 'Celtic' mandolin. American forms include the Army-Navy mandolin, the flatiron and the pancake mandolins. Tone The tone of the flatback is described as warm or mellow, suitable for folk music and smaller audiences. The instrument sound does not punch through the other players' sound like a carved top does. Double top, double back The double top is a feature that luthiers are experimenting with in the 21st century, to get better sound. However, mandolinists and luthiers have been experimenting with them since at least the early 1900s, when mandolinist Ginislao Paris approached Luigi Embergher to build custom mandolins. The sticker inside one of the four surviving instruments indicates that the model was named after him, the Sistema Ginislao Paris. Paris' round-back double-top mandolins use a false back below the soundboard to create a second hollow space within the instrument.
Modern mandolinists such as Joseph Brent and Avi Avital use instruments customized either by the luthier's choice or at the request of the player. Joseph Brent's mandolin, made by Brian Dean, also uses what Brent calls a false back. Brent's mandolin was the luthier's solution to Brent's request for a loud mandolin in which the wood was clearly audible, with less metallic sound from the strings. The type used by Avital is a variation of the flatback, with a double top that encloses a resonating chamber, sound holes on the side, and a convex back. It is made by one manufacturer in Israel, luthier Arik Kerman. Other players of Kerman mandolins include Alon Sariel, Jacob Reuven, and Tom Cohen. Others Mandolinetto Other American-made variants include the mandolinetto or Howe-Orme guitar-shaped mandolin (manufactured by the Elias Howe Company between 1897 and roughly 1920), which featured a cylindrical bulge along the top from fingerboard end to tailpiece, and the Vega mando-lute (more commonly called a cylinder-back mandolin, manufactured by the Vega Company between 1913 and roughly 1927), which had a similar longitudinal bulge but on the back rather than the front of the instrument. Mandolin-banjo An instrument with a mandolin neck paired with a banjo-style body was patented by Benjamin Bradbury of Brooklyn in 1882 and given the name banjolin by John Farris in 1885. Today the name banjolin is sometimes reserved to describe an instrument with four strings, while the version with the four courses of double strings is called a mandolin-banjo. Resonator mandolin A resonator mandolin or "resophonic mandolin" is a mandolin whose sound is produced by one or more metal cones (resonators) instead of the customary wooden soundboard (mandolin top/face). Historic brands include Dobro and National. Electric mandolin As with almost every other contemporary chordophone, another modern variant is the electric mandolin. These mandolins can have four or five individual or double courses of strings.
They were developed in the early 1930s, contemporaneous with the development of the electric guitar. They come in solid body and acoustic electric forms. Specific instruments have been designed to overcome the mandolin's rapid decay with its plucked notes. Fender released a model in 1992 with an additional string (a high A, above the E string), a tremolo bridge and an extra humbucker pickup (for a total of two). The result was an instrument capable of playing heavy metal style guitar riffs or violin-like passages with sustained notes that can be adjusted as with an electric guitar. Playing traditions worldwide The international repertoire of music for mandolin is almost unlimited, and musicians use it to play various types of music. This is especially true of violin music, since the mandolin has the same tuning as the violin. Following its invention and early development in Italy, the mandolin spread throughout the European continent. The instrument was primarily used in a classical tradition, with mandolin orchestras (so-called Estudiantinas, or in Germany Zupforchestern) appearing in many cities. Following this continental popularity of the mandolin family, local traditions appeared outside Europe in the Americas and in Japan. Travelling mandolin virtuosi like Carlo Curti, Giuseppe Pettine, Raffaele Calace and Silvio Ranieri contributed to the mandolin becoming a "fad" instrument in the early 20th century. This "mandolin craze" was fading
of microphotonics relies on Fresnel reflection to guide the light. If the photons reside mainly in the higher index material, the confinement is due to total internal reflection. If the confinement is due to many distributed Fresnel reflections, the device is termed a photonic crystal. There are many different types of geometries used in microphotonics, including optical waveguides, optical microcavities, and arrayed waveguide gratings. Photonic crystals Photonic crystals are non-conducting materials that reflect various wavelengths of light almost perfectly. Such a crystal can be referred to as a perfect mirror. Other devices employed in microphotonics include micromirrors and photonic wire waveguides. These tools are used to "mold the flow of light", a famous phrase for describing the goal of microphotonics. The crystals serve as structures that
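The two guiding conditions described above can be checked numerically with Snell's law and the Fresnel equations. A minimal sketch; the refractive indices (a silicon core with silica cladding, a common microphotonic pairing) are illustrative assumptions, not values from the text:

```python
import math

# Illustrative values (assumptions, not from the source):
n_core = 3.48   # silicon, a common high-index waveguide core
n_clad = 1.44   # fused silica cladding

# Snell's law: total internal reflection occurs for rays striking the
# interface (from the high-index side) beyond the critical angle.
critical_angle = math.degrees(math.asin(n_clad / n_core))

# Fresnel reflectance at normal incidence for a single interface;
# a photonic crystal stacks many such partial reflections.
reflectance = ((n_core - n_clad) / (n_core + n_clad)) ** 2

print(f"critical angle: {critical_angle:.1f} degrees")
print(f"normal-incidence reflectance: {reflectance:.2f}")
```

A single silicon/silica interface reflects only about 17% of normally incident light, which is why photonic crystals rely on many distributed reflections interfering constructively rather than on any one interface.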
on products such as the Lisa and Macintosh (eventually settled in court in Microsoft's favor in 1993). On PCs, Windows is still the most popular operating system in all countries. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, numbers for server use of Windows (which are comparable to competitors') show a one-third market share, similar to that for end-user use. The most recent version of Windows for PCs and tablets is Windows 11, version 21H2. The most recent version for embedded devices is Windows 10, version 21H1. The most recent version for server computers is Windows Server 2022, version 21H2. A specialized version of Windows also runs on the Xbox One and Xbox Series X/S video game consoles. Genealogy By marketing role Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. As of 2014, the following Windows families were being actively developed: Windows NT: Started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel: Windows: The operating system for mainstream personal computers and tablets. The latest version is Windows 11. The main competitors of this family are macOS by Apple for personal computers, and iPadOS and Android for tablets. Windows Server: The operating system for server computers. The latest version is Windows Server 2022. Unlike its client sibling, it has adopted a strong naming scheme.
The main competitor of this family is Linux. Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers (especially on many computers at once), recovery or troubleshooting purposes. The latest version is Windows PE 10. Windows IoT (previously Windows Embedded): Initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. Eventually, however, Windows CE was renamed Windows Embedded Compact and was folded under the Windows Compact trademark, which also consists of Windows Embedded Industry, Windows Embedded Professional, Windows Embedded Standard, Windows Embedded Handheld and Windows Embedded Automotive. The following Windows families are no longer being developed: Windows 9x: An operating system that targeted the consumer market. Discontinued because of suboptimal performance. (PC World called its last version, Windows Me, one of the worst products of all time.) Microsoft now caters to the consumer market with Windows NT. Windows Mobile: The predecessor to Windows Phone, it was a mobile phone operating system. The first version was called Pocket PC 2000; the third version, Windows Mobile 2003, is the first version to adopt the Windows Mobile trademark. The last version is Windows Mobile 6.5. Windows Phone: An operating system sold only to manufacturers of smartphones. The first version was Windows Phone 7, followed by Windows Phone 8, and Windows Phone 8.1. It was succeeded by Windows 10 Mobile, which is now also discontinued. Version history The term Windows collectively describes any or all of several generations of Microsoft operating system products. These products are generally categorized as follows: Early versions The history of Windows dates back to 1981 when Microsoft started work on a program called "Interface Manager".
It was announced in November 1983 (after the Apple Lisa, but before the Macintosh) under the name "Windows", but Windows 1.0 was not released until November 1985. Windows 1.0 was to compete with Apple's operating system, but achieved little popularity. Windows 1.0 is not a complete operating system; rather, it extends MS-DOS. The shell of Windows 1.0 is a program known as the MS-DOS Executive. Components included Calculator, Calendar, Cardfile, Clipboard Viewer, Clock, Control Panel, Notepad, Paint, Reversi, Terminal and Write. Windows 1.0 does not allow overlapping windows. Instead, all windows are tiled. Only modal dialog boxes may appear over other windows. Microsoft sold the Windows development libraries bundled with the C development environment, which included numerous sample Windows programs. Windows 2.0 was released in December 1987, and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows. This change led to Apple Computer filing a suit against Microsoft, alleging infringement of Apple's copyrights. Windows 2.0 also introduced more sophisticated keyboard shortcuts and could make use of expanded memory. Windows 2.1 was released in two different versions: Windows/286 and Windows/386. Windows/386 uses the virtual 8086 mode of the Intel 80386 to multitask several DOS programs and the paged memory model to emulate expanded memory using available extended memory. Windows/286, in spite of its name, runs on both Intel 8086 and Intel 80286 processors. It runs in real mode but can make use of the high memory area. In addition to full Windows packages, there were runtime-only versions that shipped with early Windows software from third parties and made it possible to run their Windows software on MS-DOS without the full Windows feature set.
The early versions of Windows are often thought of as graphical shells, mostly because they ran on top of MS-DOS and used it for file system services. However, even the earliest Windows versions already assumed many typical operating system functions; notably, having their own executable file format and providing their own device drivers (timer, graphics, printer, mouse, keyboard and sound). Unlike MS-DOS, Windows allowed users to execute multiple graphical applications at the same time, through cooperative multitasking. Windows implemented an elaborate, segment-based, software virtual memory scheme, which allowed it to run applications larger than available memory: code segments and resources were swapped in and thrown away when memory became scarce, and data segments were moved in memory when a given application relinquished processor control. Windows 3.x Windows 3.0, released in 1990, improved the design, mostly because of virtual memory and loadable virtual device drivers (VxDs) that allow Windows to share arbitrary devices between multi-tasked DOS applications. Windows 3.0 applications can run in protected mode, which gives them access to several megabytes of memory without the obligation to participate in the software virtual memory scheme. They run inside the same address space, where the segmented memory provides a degree of protection. Windows 3.0 also featured improvements to the user interface. Microsoft rewrote critical operations from C into assembly. Windows 3.0 is the first Microsoft Windows version to achieve broad commercial success, selling 2 million copies in the first six months. Windows 3.1, made generally available on March 1, 1992, featured a facelift. In August 1993, Windows for Workgroups, a special version with integrated peer-to-peer networking features and a version number of 3.11, was released. It was sold along with Windows 3.1. Support for Windows 3.1 ended on December 31, 2001.
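The discardable-segment scheme described above behaves much like a least-recently-used cache: segments are dropped when memory runs low and reloaded from the executable on next use. A toy sketch of that policy (illustrative only, not actual Windows code; the class and segment names are invented):

```python
from collections import OrderedDict

# Toy illustration of discardable code segments managed like an LRU cache:
# when memory runs low, the least recently used segment is thrown away,
# to be reloaded from the executable file the next time it is needed.
class SegmentTable:
    def __init__(self, capacity_kb: int):
        self.capacity_kb = capacity_kb
        self.loaded = OrderedDict()  # segment name -> size in KB

    def touch(self, name: str, size_kb: int) -> None:
        """Load (or re-reference) a segment, evicting LRU segments as needed."""
        if name in self.loaded:
            self.loaded.move_to_end(name)   # mark as most recently used
            return
        while sum(self.loaded.values()) + size_kb > self.capacity_kb:
            evicted, _ = self.loaded.popitem(last=False)  # discard LRU segment
            print(f"discarded segment {evicted}")
        self.loaded[name] = size_kb

table = SegmentTable(capacity_kb=64)
table.touch("CODE1", 32)
table.touch("CODE2", 24)
table.touch("CODE1", 32)   # CODE1 becomes most recently used
table.touch("CODE3", 16)   # evicts CODE2, the least recently used
print(list(table.loaded))  # ['CODE1', 'CODE3']
```

Real Windows distinguished discardable code segments (reloadable from disk) from movable data segments, which could only be compacted in memory, not thrown away; the sketch models only the discardable case.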
Windows 3.2, released in 1994, is an updated version of the Chinese version of Windows 3.1. The update was limited to this language version, as it fixed only issues related to the complex writing system of the Chinese language. Windows 3.2 was generally sold by computer manufacturers with a ten-disk version of MS-DOS that also had Simplified Chinese characters in basic output and some translated utilities. Windows 9x The next major consumer-oriented release of Windows, Windows 95, was released on August 24, 1995. While still remaining MS-DOS-based, Windows 95 introduced support for native 32-bit applications, plug and play hardware, preemptive multitasking, long file names of up to 255 characters, and provided increased stability over its predecessors. Windows 95 also introduced a redesigned, object-oriented user interface, replacing the previous Program Manager with the Start menu, taskbar, and Windows Explorer shell. Windows 95 was a major commercial success for Microsoft; Ina Fried of CNET remarked that "by the time Windows 95 was finally ushered off the market in 2001, it had become a fixture on computer desktops around the world." Microsoft published four OEM Service Releases (OSR) of Windows 95, each of which was roughly equivalent to a service pack. The first OSR of Windows 95 was also the first version of Windows to be bundled with Microsoft's web browser, Internet Explorer. Mainstream support for Windows 95 ended on December 31, 2000, and extended support for Windows 95 ended on December 31, 2001. Windows 95 was followed up with the release of Windows 98 on June 25, 1998, which introduced the Windows Driver Model, support for USB composite devices, support for ACPI, hibernation, and support for multi-monitor configurations. Windows 98 also included integration with Internet Explorer 4 through Active Desktop and other aspects of the Windows Desktop Update (a series of enhancements to the Explorer shell which were also made available for Windows 95).
In May 1999, Microsoft released Windows 98 Second Edition, an updated version of Windows 98. Windows 98 SE added Internet Explorer 5.0 and Windows Media Player 6.2 amongst other upgrades. Mainstream support for Windows 98 ended on June 30, 2002, and extended support for Windows 98 ended on July 11, 2006. On September 14, 2000, Microsoft released Windows Me (Millennium Edition), the last DOS-based version of Windows. Windows Me incorporated visual interface enhancements from its Windows NT-based counterpart Windows 2000, had faster boot times than previous versions (which, however, required the removal of the ability to access a real mode DOS environment, removing compatibility with some older programs), expanded multimedia functionality (including Windows Media Player 7, Windows Movie Maker, and the Windows Image Acquisition framework for retrieving images from scanners and digital cameras), additional system utilities such as System File Protection and System Restore, and updated home networking tools. However, Windows Me faced criticism for its speed and instability, along with hardware compatibility issues and its removal of real mode DOS support. PC World considered Windows Me to be one of the worst operating systems Microsoft had ever released, and the 4th worst tech product of all time. Windows NT Version history Early versions (Windows NT 3.1/3.5/3.51/4.0/2000) In November 1988, a new development team within Microsoft (which included former Digital Equipment Corporation developers Dave Cutler and Mark Lucovsky) began work on a revamped version of IBM and Microsoft's OS/2 operating system known as "NT OS/2". NT OS/2 was intended to be a secure, multi-user operating system with POSIX compatibility and a modular, portable kernel with preemptive multitasking and support for multiple processor architectures.
However, following the successful release of Windows 3.0, the NT development team decided to rework the project to use an extended 32-bit port of the Windows API known as Win32 instead of those of OS/2. Win32 maintained a similar structure to the Windows APIs (allowing existing Windows applications to easily be ported to the platform), but also supported the capabilities of the existing NT kernel. Following its approval by Microsoft's staff, development continued on what was now Windows NT, the first 32-bit version of Windows. However, IBM objected to the changes, and ultimately continued OS/2 development on its own. Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. The first release of the resulting operating system, Windows NT 3.1 (named to associate it with Windows 3.1) was released in July 1993, with versions for desktop workstations and servers. Windows NT 3.5 was released in September 1994, focusing on performance improvements and support for Novell's NetWare, and was followed up by Windows NT 3.51 in May 1995, which included additional improvements and support for the PowerPC architecture. Windows NT 4.0 was released in June 1996, introducing the redesigned interface of Windows 95 to the NT series. On February 17, 2000, Microsoft released Windows 2000, a successor to NT 4.0. The Windows NT name was dropped at this point in order to put a greater focus on the Windows brand. Windows XP The next major version of Windows NT, Windows XP, was released on October 25, 2001. The introduction of Windows XP aimed to unify the consumer-oriented Windows 9x series with the architecture introduced by Windows NT, a change which Microsoft promised would provide better performance over its DOS-based predecessors. 
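The NT releases named so far, and those discussed below, line up with internal kernel version numbers. As an informal reference, the version numbers are widely documented but the table itself is my own summary, not from the source:

```python
# Informal summary table: NT kernel version -> marketing name.
# (Version numbers are well-known; Windows 11 still reports kernel 10.0.)
NT_RELEASES = {
    "3.1": "Windows NT 3.1",
    "3.5": "Windows NT 3.5",
    "3.51": "Windows NT 3.51",
    "4.0": "Windows NT 4.0",
    "5.0": "Windows 2000",
    "5.1": "Windows XP",
    "6.0": "Windows Vista",
    "6.1": "Windows 7",
    "6.2": "Windows 8",
    "6.3": "Windows 8.1",
    "10.0": "Windows 10 and Windows 11",
}

def marketing_name(kernel_version: str) -> str:
    """Look up the marketing name for an NT kernel version string."""
    return NT_RELEASES.get(kernel_version, "unknown")

print(marketing_name("5.1"))   # Windows XP
print(marketing_name("10.0"))  # Windows 10 and Windows 11
```

The table also shows why "the Windows NT name was dropped": after 2000, marketing names diverged from kernel numbering entirely.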
Windows XP would also introduce a redesigned user interface (including an updated Start menu and a "task-oriented" Windows Explorer), streamlined multimedia and networking features, Internet Explorer 6, integration with Microsoft's .NET Passport services, a "compatibility mode" to help provide backwards compatibility with software designed for previous versions of Windows, and Remote Assistance functionality. At retail, Windows XP was now marketed in two main editions: the "Home" edition was targeted towards consumers, while the "Professional" edition was targeted towards business environments and power users, and included additional security and networking features. Home and Professional were later accompanied by the "Media Center" edition (designed for home theater PCs, with an emphasis on support for DVD playback, TV tuner cards, DVR functionality, and remote controls), and the "Tablet PC" edition (designed for mobile devices meeting its specifications for a tablet computer, with support for stylus pen input and additional pen-enabled applications). Mainstream support for Windows XP ended on April 14, 2009. Extended support ended on April 8, 2014. After Windows 2000, Microsoft also changed its release schedules for server operating systems; the server counterpart of Windows XP, Windows Server 2003, was released in April 2003. It was followed in December 2005 by Windows Server 2003 R2. Windows Vista After a lengthy development process, Windows Vista was released on November 30, 2006, for volume licensing and January 30, 2007, for consumers. It contained a number of new features, from a redesigned shell and user interface to significant technical changes, with a particular focus on security features. It was available in a number of different editions, and has been subject to some criticism, such as reduced performance, longer boot times, criticism of the new UAC, and a stricter license agreement.
Vista's server counterpart, Windows Server 2008, was released in early 2008. Windows 7 On July 22, 2009, Windows 7 and Windows Server 2008 R2 were released as RTM (release to manufacturing) while the former was released to the public 3 months later on October 22, 2009. Unlike its predecessor, Windows Vista, which introduced a large number of new features, Windows 7 was intended to be a more focused, incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista was already compatible. Windows 7 has multi-touch support, a redesigned Windows shell with an updated taskbar with revealable jump lists that contain shortcuts to files frequently used with specific applications and shortcuts to tasks within the application, a home networking system called HomeGroup, and performance improvements. Windows 8 and 8.1 Windows 8, the successor to Windows 7, was generally released on October 26, 2012. A number of significant changes were made in Windows 8, including the introduction of a user interface based around Microsoft's Metro design language with optimizations for touch-based devices such as tablets and all-in-one PCs. These changes include the Start screen, which uses large tiles that are more convenient for touch interactions and allow for the display of continually updated information, and a new class of apps which are designed primarily for use on touch-based devices. The new Windows version required a minimum resolution of 1024×768 pixels, effectively making it unfit for netbooks with 800×600-pixel screens.
Other changes include increased integration with cloud services and other online platforms (such as social networks and Microsoft's own OneDrive (formerly SkyDrive) and Xbox Live services), the Windows Store service for software distribution, and a new variant known as Windows RT for use on devices that utilize the ARM architecture, and a new keyboard shortcut for screenshots.
An update to Windows 8, called Windows 8.1, was released on October 17, 2013, and includes features such as new live tile sizes, deeper OneDrive integration, and many other revisions. Windows 8 and Windows 8.1 have been subject to some criticism, such as removal of the Start menu. Windows 10 On September 30, 2014, Microsoft announced Windows 10 as the successor to Windows 8.1. It was released on July 29, 2015, and addresses shortcomings in the user interface first introduced with Windows 8. Changes on PC include the return of the Start Menu, a virtual desktop system, and the ability to run Windows Store apps within windows on the desktop rather than in full-screen mode. Windows 10 was made available as an upgrade for qualified Windows 7 SP1, Windows 8.1 and Windows Phone 8.1 devices, through the Get Windows 10 application (for Windows 7 and Windows 8.1) or Windows Update (for Windows 7). In February 2017, Microsoft announced the migration of its Windows source code repository from Perforce to Git. This migration involved 3.5 million separate files in a 300 gigabyte repository. By May 2017, 90 percent of its engineering team was using Git, making about 8500 commits and producing 1760 Windows builds per day. In June 2021, shortly before Microsoft's announcement of Windows 11, Microsoft updated their lifecycle policy pages for Windows 10, revealing that support for the last release of Windows 10 will end on October 14, 2025. Windows 11 On June 24, 2021, Windows 11 was announced as the successor to Windows 10 during a livestream. The new operating system was designed to be more user-friendly and understandable. It was released on October 5, 2021. Windows 11 is currently a free upgrade for some Windows 10 users. Windows 365 In July 2021, Microsoft announced it would start selling subscriptions to virtualized Windows desktops as part of a new Windows 365 service in the following month.
It is not a standalone version of Microsoft Windows, but a web service that provides access to Windows 10 and Windows 11, built on top of Azure Virtual Desktop. The new service will allow for cross-platform usage, aiming to make the operating system available for both Apple and Android users. The subscription-based service will be accessible through any operating system with a web browser. Microsoft has stated that the new service is an attempt at capitalizing on the growing trend, fostered during the COVID-19 pandemic, for businesses to adopt a hybrid work environment, in which "employees split their time between the office and home" according to vice president Jared Spataro. As the service will be accessible through web browsers, Microsoft will be able to bypass the need to publish the service through Google Play or the Apple App Store. Microsoft announced Windows 365 availability to business and enterprise customers on August 2, 2021. Multilingual support Multilingual support has been built into Windows since Windows 3.0. The language for both the keyboard and the interface can be changed through the Region and Language Control Panel. Components for all supported input languages, such as Input Method Editors, are automatically installed during Windows installation (in Windows XP and earlier, files for East Asian languages, such as Chinese, and right-to-left scripts, such as Arabic, may need to be installed separately, from the same Control Panel). Third-party IMEs may also be installed if a user feels that the provided one is insufficient for their needs. Interface languages for the operating system are free for download, but some languages are limited to certain editions of Windows.
Language Interface Packs (LIPs) are redistributable and may be downloaded from Microsoft's Download Center and installed for any edition of Windows (XP or later); they translate most, but not all, of the Windows interface, and require a certain base language (the language which Windows originally shipped with). This is used for most languages in emerging markets. Full Language Packs, which translate the complete operating system, are only available for specific editions of Windows (Ultimate and Enterprise editions of Windows Vista and 7, and all editions of Windows 8, 8.1 and RT except Single Language). They do not require a specific base language, and are commonly used for more popular languages such as French or Chinese. These languages cannot be downloaded through the Download Center, but are available as optional updates through the Windows Update service (except Windows 8). The interface language of installed applications is not affected by changes in the Windows interface language. The availability of languages depends on the application developers themselves. Windows 8 and Windows Server 2012 introduce a new Language Control Panel where both the interface and input languages can be simultaneously changed, and language packs, regardless of type, can be downloaded from a central location. The PC Settings app in Windows 8.1 and Windows Server 2012 R2 also includes a counterpart settings page for this. Changing the interface language also changes the language of preinstalled Windows Store apps (such as Mail, Maps and News) and certain other Microsoft-developed apps (such as Remote Desktop). The above limitations for language packs are, however, still in effect, except that full language packs can be installed for any edition except Single Language, which caters to emerging markets. Platform support Windows NT included support for several platforms before the x86-based personal computer became dominant in the professional world.
Windows NT 4.0 and its predecessors supported PowerPC, DEC Alpha and MIPS R4000 (although some of the platforms implement 64-bit computing, the OS treated them as 32-bit). Windows 2000 dropped support for all platforms except the third-generation x86 (known as IA-32) or newer in 32-bit mode. The client line of the Windows NT family still runs on IA-32, but the Windows Server line ceased supporting this platform with the release of Windows Server 2008 R2. With the introduction of the Intel Itanium architecture (IA-64), Microsoft released new versions of Windows to support it. Itanium versions of Windows XP and Windows Server 2003 were released at the same time as their mainstream x86 counterparts. Windows XP 64-Bit Edition, released in 2005, is the last Windows client operating system to support Itanium. The Windows Server line continued to support this platform until Windows Server 2012; Windows Server 2008 R2 is the last Windows operating system to support the Itanium architecture. On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003 x64 Editions to support x86-64 (or simply x64), the 64-bit version of the x86 architecture. Windows Vista was the first client version of Windows NT to be released simultaneously in IA-32 and x64 editions. x64 is still supported. An edition of Windows 8 known as Windows RT was specifically created for computers with ARM architecture, and while ARM is still used for Windows smartphones with Windows 10, tablets with Windows RT will not be updated. Starting with the Windows 10 Fall Creators Update (version 1709), Windows includes support for PCs with ARM architecture. Windows 11 is the first version to drop support for 32-bit hardware. Windows CE Windows CE (officially known as Windows Embedded Compact) is an edition of Windows that runs on minimalistic computers, like satellite navigation systems and some mobile phones. Windows Embedded Compact is based on its own dedicated kernel, dubbed the Windows CE kernel.
Microsoft licenses Windows CE to OEMs and device makers. The OEMs and device makers can modify and create their own user interfaces and experiences, while Windows CE provides the technical foundation to do so. Windows CE was used in the Dreamcast along with Sega's own proprietary OS for the console. Windows CE was the core from which Windows Mobile was derived. Its successor, Windows Phone 7, was based on components from both Windows CE 6.0 R3 and Windows CE 7.0. Windows Phone 8, however, is based on the same NT kernel as Windows 8. Windows Embedded Compact is not to be confused with Windows XP Embedded or Windows NT 4.0 Embedded, modular editions of Windows based on the Windows NT kernel. Xbox OS Xbox OS is an unofficial name given to the version of Windows that runs on Xbox consoles. From the Xbox One onwards, it is an implementation with an emphasis on virtualization (using Hyper-V), as it is three operating systems running at once, consisting of the core operating system, a second implemented for games and a more Windows-like environment for applications. Microsoft updates the Xbox One's OS every month, and these updates can be downloaded from the Xbox Live service to the Xbox and subsequently installed, or by using offline recovery images downloaded via a PC. It was originally based on the NT 6.2 (Windows 8) kernel, and the latest version runs on an NT 10.0 base. This system is sometimes referred to as "Windows 10 on Xbox One" or "OneCore". The Xbox One and Xbox Series operating systems also allow limited backward compatibility with previous-generation hardware (limited due to licensing restrictions and testing resources), and the Xbox 360's system is backwards compatible with the original Xbox. Version control system Before 2017, Microsoft used a proprietary version control system called Source Depot, which could not keep up with the size of Windows. Microsoft had begun to integrate Git into Team Foundation Server in 2013, but Windows continued to rely on Source Depot.
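The core scaling problem is that a standard Git clone copies a repository's entire history, including every version of every file. Git's partial-clone feature, the mechanism that later tooling such as Scalar configures automatically, defers downloading file contents (blobs) until they are needed. A minimal offline sketch, using a throwaway local repository (all names and paths here are illustrative, not anything from Microsoft's setup):

```shell
set -e
# Build a tiny repository to act as the "server".
git init -q source
git -C source config user.email demo@example.com
git -C source config user.name demo
echo "hello" > source/file.txt
git -C source add file.txt
git -C source commit -qm "initial commit"

# --filter=blob:none fetches commits and trees up front, but blobs
# are fetched lazily; checkout then materializes only what it needs.
git clone -q --filter=blob:none "file://$PWD/source" partial
cat partial/file.txt
```

On a repository the size of Windows, this lazy-fetch approach is what turns a multi-hour full clone into a much smaller initial download.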
The Windows code was divided among 65 different repositories with a kind of virtualization layer to produce a unified view of all of the code. In 2017, Microsoft announced that it would start using Git, an open-source version control system created by Linus Torvalds, and in May 2017 it reported that the migration into a single Git repository was complete. VFSForGit Because of its large, decades-long history, however, the Windows codebase is not especially well suited to the decentralized nature of Linux development that Git was originally created to manage. Each Git repository contains a complete history of all the files, which proved unworkable for Windows developers because cloning the whole repository takes several hours. Microsoft has been working on a new project called the Virtual File System for Git (VFSForGit) to address these challenges. In 2021, VFS for Git was superseded by Scalar. Timeline of releases Usage share and device sales Use of the latest version, Windows 10, has exceeded that of Windows 7 globally since early 2018. For desktop and laptop computers, according to Net Applications and StatCounter, which track the use of operating systems in devices that are active on the Web, Windows was the most used operating-system family in August 2021, with around 91% usage share according to Net Applications and around 76% usage share according to StatCounter. Including personal computers of all kinds (e.g., desktops, laptops, mobile devices, and game consoles), Windows OSes accounted for 32.67% of usage share in August 2021, compared to Android (highest, at 46.03%), iOS's 13.76%, iPadOS's 2.81%, and macOS's 2.51%, according to Net Applications, and 30.73% of usage share in August 2021, compared to Android (highest, at 42.56%), iOS/iPadOS's 16.53%, and macOS's 6.51%, according to StatCounter. Those statistics do not include servers (including so-called cloud
in enslaved and free African American communities. Enslaved and free black root workers created mojo bags and placed Bible verses, petition papers, roots, herbs, animal parts, graveyard dirt, and other ingredients inside them to conjure a negative or positive effect. They used either Christian or Islamic prayers to spiritually charge the mojo bag. During slavery, many of the mojo bags created were for protection against a harsh slaveholder. The petition papers placed inside a mojo bag can contain a Bible verse, a Quranic verse, symbols, or other characters to conjure a positive or negative magical result. In the United States, enslaved African Americans called mojo bags "voodoo bags." After the Civil War, mojo bags were created in Black American communities for protection from law enforcement, to attract love, protection, money, or employment, or to communicate with spirits. Folklorist Newbell Niles Puckett documented a mojo practice of an African-American cook in the Mississippi Delta. The cook had a mojo bag with a "lizard's tail, rabbit's foot, a fish eye, snake skins, a beetle, and a dime with a hole in it." This mojo bag was worn by the cook for good luck. Other conjure bundles in the hoodoo tradition are hung beside the door or the beds where people sleep to protect against conjure. Traditionally, a client consulted with a root worker to know what kind of mojo he or she needed, as not all mojos are the same and one mojo cannot work for everyone. Each person needs a different mojo. In traditional Hoodoo, if there are several people needing love, the root worker or conjurer created different mojos for each of their clients. One mojo made the same way cannot work for everyone. By the twentieth century, Hoodoo was culturally appropriated by outsiders to African-American culture to make a profit. Spiritual shops began to sell the same mojo to everyone.
In traditional Hoodoo, certain songs, prayers, symbols, and ingredients are used to conjure or manifest results. However, when Hoodoo was appropriated by white spiritual merchants, the same mojo was sold to every consumer. Maintenance Fixing and feeding a mojo hand There is a process to fixing a proper mojo. A ritual must be put in place in order to successfully prepare a mojo by being filled and awakened to life. This can be done by smoking it with incense and candles, or it may be breathed upon to bring it to life. Prayers may be said, and other methods may be used to accomplish this essential step. Once prepared, the mojo is "dressed" or "fed" with a liquid such as alcohol, perfume, water, or bodily fluids. The mojo is said to be fed to keep it working because it is alive with spirit. One story from the work entitled From My People describes a slave who sought out a mojo conjurer, who gave him a mojo to run away from home. The story describes the slave's mojo as fixing him into many formations, and he ultimately dies because he misuses its power. Had he fixed and believed in the specific mojo himself, he might have escaped the plantation alive. Hiding the mojo Mojos are traditionally made for an individual and so must be concealed on the person at all times. Men usually keep the trinkets hidden in a pants pocket, while women are more prone to clip them to the bra. They are also commonly pinned to clothes below the waist. Depending on the type of mojo, the hiding place will be crucial to its success, as those who make conjure bags to carry love spells sometimes specify that the mojo must be worn next to the skin. A story from the book From My People described the story of Moses and the task he went through to bring his people out of slavery. It described how "Hoodoo Lost his Hand", as Moses's mojo was hidden in his staff. When he turned it into a snake, the pharaoh made his soothsayers and magicians create the same effect.
As a result, the Pharaoh's snake was killed by Moses's snake, and that is how Hoodoo lost his hand. Slave Narratives In the 1930s, the Federal Writers' Project, part of the Works Progress Administration during the Great Depression, provided jobs for unemployed writers to write and collect the experiences of former slaves. Writers, black and white, documented the experiences of the last generation of African Americans born into slavery. Former African American slaves told writers about their experiences of slavery, which provided readers a glimpse into the lives of the enslaved. Slave narratives revealed the culture of African Americans during slavery. African American former slaves talked about conjure, rootwork, and Hoodoo. These narratives revealed how enslaved
similar to the mojo bags in Hoodoo. In addition, archeologists in New York discovered continued West-Central African burial practices in a section of Lower Manhattan, New York City, which is now the location of the African Burial Ground National Monument. Historians and archeologists found Kongo-related artifacts at the African Burial Ground, such as minkisi and nkisi conjure bundles buried with African remains. These nkisi and minkisi bundles became the conjure bags in Hoodoo. According to scholars, the origin of the word hoodoo and other words associated with the practice were traced to the Bight of Benin and Senegambia. For example, in West Africa the word gris-gris (a conjure bag) is a Mande word. The word wanga (another word for mojo bag) comes from the Kikongo language. The word mojo comes from the West African word mojuba. The most common synonym for the word mojo is gris-gris, which literally means "fetish" or "charm"; thus a gris-gris bag is a charm bag. In the Caribbean, an almost identical African-derived bag is called a wanga or oanga bag, but that term is uncommon in the United States. The word conjure is an ancient alternative to "hoodoo", which is a direct variation of African-American folklore.
Because of this, a conjure hand is also considered a hoodoo bag, usually made by a respected community conjure doctor. The word mojo also originated from the Kikongo word mooyo. The word mooyo means that natural ingredients have their own indwelling spirit that can be utilized in mojo bags to bring luck and protection. The word hand in this context is defined as a combination of ingredients. The term may derive from the use of finger and hand bones from the dead in mojo bags, or from ingredients such as the lucky hand root (favored by gamblers). The latter suggests an analogy between the varied bag ingredients and the several cards that make up a hand in card games. Mojo reaches as far back as West African culture, where it is said to drive away evil spirits, keep good luck in the household, manipulate a fortune, and lure and persuade lovers. Generations of ancestors and their descendants used this "prayer in a bag" based on their belief in spiritual inheritance, by which the omniscient forefathers of their families would provide protection and favor, especially when they used the mojo. Through this, a strong belief was placed in the idealism of whoever used mojo, creating a spiritual trust in the magic itself. Making a Mojo Most Southern-style conjure bags are made of red flannel material. The use of red flannel for mojo bags was influenced by the Bakongo people's minkisi in Central Africa, and in Hoodoo red symbolizes protection from evil and spiritual power. When red cloth was not available, African Americans used whatever cloth they had to create a conjure bag. The contents of each bag vary directly with the aim of the conjurer. For example, a mojo carried for love-drawing will contain different ingredients than one for gambling luck or magical protection. Ingredients can include graveyard dirt, roots, herbs, animal parts, minerals, coins, crystals, good luck tokens, and carved amulets.
The more personalized objects are used to add extra power because of their symbolic value. A former slave from Texas said that to make a conjure bag, African Americans "would take hair and brass nails and thimbles and needles and mix them up in a conjure bag." Prince Johnson, a former slave from Mississippi, said his slaveholder would inspect her slaves to make sure they did not have any charms underneath their clothes. Some mojo bags were made to cause harm and bad luck for slaveholders, and other mojo bags were for protection, depending on the ingredients used by the root worker. William Webb made mojo bags for
was finally asked to give up the "music" column. Among the fans of the column was Harry Shearer, who would later become a voice on The Simpsons. Life in Hell became popular almost immediately. In November 1984, Deborah Caplan, Groening's then-girlfriend and co-worker at the Reader, offered to publish "Love is Hell", a series of relationship-themed Life in Hell strips, in book form. Released a month later, the book was an underground success, selling 22,000 copies in its first two printings. Work is Hell soon followed, also published by Caplan. Soon afterward, Caplan and Groening left and put together the Life in Hell Co., which handled merchandising for Life in Hell. Groening also started Acme Features Syndicate, which initially syndicated Life in Hell as well as work by Lynda Barry and John Callahan, but would eventually only syndicate Life in Hell. At the end of its run, Life in Hell was carried in 250 weekly newspapers and has been anthologized in a series of books, including School is Hell, Childhood is Hell, The Big Book of Hell, and The Huge Book of Hell. Although Groening previously stated, "I'll never give up the comic strip. It's my foundation," the June 16, 2012 strip marked Life in Hell's conclusion. After Groening ended the strip, the Center for Cartoon Studies commissioned a poster that was presented to Groening in honor of his work. The poster contained tribute cartoons by 22 of Groening's cartoonist friends who were influenced by Life in Hell. The Simpsons Creation Life in Hell caught the attention of Hollywood writer-producer and Gracie Films founder James L. Brooks, who had been shown the strip by fellow producer Polly Platt. In 1985, Brooks contacted Groening with the proposition of working in animation on an undefined future project, which would turn out to be developing a series of short animated skits, called "bumpers," for the Fox variety show The Tracey Ullman Show.
Originally, Brooks wanted Groening to adapt his Life in Hell characters for the show. Groening feared that he would have to give up his ownership rights, and that the show would fail and would take down his comic strip with it. Groening conceived of the idea for the Simpsons in the lobby of James L. Brooks's office and hurriedly sketched out his version of a dysfunctional family: Homer, the overweight father; Marge, the slim mother; Bart, the bratty oldest child; Lisa, the intelligent middle child; and Maggie, the baby. Groening famously named the main Simpson characters after members of his own family: his parents, Homer and Marge (Margaret or Marjorie in full), and his younger sisters, Lisa and Margaret (Maggie). Claiming that it was a bit too obvious to name a character after himself, he chose the name "Bart," an anagram of brat. However, he stresses that aside from some of the sibling rivalry, his family is nothing like the Simpsons. Groening also has an older brother and sister, Mark and Patty, and in a 1995 interview Groening divulged that Mark "is the actual inspiration for Bart." Maggie Groening has co-written a few Simpsons books featuring her cartoon namesake. The Tracey Ullman Show The family was crudely drawn, because Groening had submitted basic sketches to the animators, assuming they would clean them up; instead, they just traced over his drawings. The entire Simpson family was designed so that they would be recognizable in silhouette. When Groening originally designed Homer, he put his own initials into the character's hairline and ear: the hairline resembled an 'M', and the right ear resembled a 'G'. Groening decided that this would be too distracting though, and redesigned the ear to look normal. He still draws the ear as a 'G' when he draws pictures of Homer for fans. Marge's distinct beehive hairstyle was inspired by Bride of Frankenstein and the style that Margaret Groening wore during the 1960s, although her hair was never blue. 
Bart's original design, which appeared in the first shorts, had spikier hair, and the spikes were of different lengths. The number was later limited to nine spikes, all of the same size. At the time, Groening was primarily drawing in black and, "not thinking that [Bart] would eventually be drawn in color", gave him spikes that appear to be an extension of his head. Lisa's physical features are generally not used in other characters; for example, in the later seasons, no character other than Maggie shares her hairline. While designing Lisa, Groening "couldn't be bothered to even think about girls' hair styles". When designing Lisa and Maggie, he "just gave them this kind of spiky starfish hair style, not thinking that they would eventually be drawn in color". Groening storyboarded and scripted every short (now known as The Simpsons shorts), which were then animated by a team including David Silverman and Wes Archer, both of whom would later become directors on the series. The Simpsons shorts first appeared in The Tracey Ullman Show on April 19, 1987. Another family member, Grampa Simpson, was introduced in the later shorts. Years later, during the early seasons of The Simpsons, when it came time to give Grampa a first name, Groening says he refused to name him after his own grandfather, Abraham Groening, leaving it to other writers to choose a name. By coincidence, they chose "Abraham", unaware that it was the name of Groening's grandfather. Half-hour Although The Tracey Ullman Show was not a big hit, the popularity of the shorts led to a half-hour spin-off in 1989. A team of production companies adapted The Simpsons into a half-hour series for the Fox Broadcasting Company. The team included what is now the Klasky Csupo animation house. James L. Brooks negotiated a provision in the contract with the Fox network that prevented Fox from interfering with the show's content.
Groening said his goal in creating the show was to offer the audience an alternative to what he called "the mainstream trash" that they were watching. The half-hour series premiered on December 17, 1989, with "Simpsons Roasting on an Open Fire", a Christmas special. "Some Enchanted Evening" was the first full-length episode produced, but it was not broadcast until May 1990, as the last episode of the first season, because of animation problems. The series quickly became a worldwide phenomenon, to the surprise of many. Groening said: "Nobody thought The Simpsons was going to be a big hit. It sneaked up on everybody." The Simpsons was co-developed by Groening, Brooks, and Sam Simon, a writer-producer with whom Brooks had worked on previous projects. Groening and Simon, however, did not get along and were often in conflict over the show; Groening once described their relationship as "very contentious." Simon eventually left the show in 1993 over creative differences. Like the main family members, several characters from the show have names that were inspired by people, locations or films. The name "Wiggum" for police chief Chief Wiggum is Groening's mother's maiden name. The names of a few other characters were taken from major street names in Groening's hometown of Portland, Oregon, including Flanders, Lovejoy, Powell, Quimby and Kearney. Despite common fan belief that Sideshow Bob Terwilliger was named after SW Terwilliger Boulevard in Portland, he was actually named after the character Dr. Terwilliker from the film The 5,000 Fingers of Dr. T. Although Groening has pitched a number of spin-offs from The Simpsons, those attempts have been unsuccessful. In 1994, Groening and other Simpsons producers pitched a live-action spin-off about Krusty the Clown (with Dan Castellaneta playing the lead role), but were unsuccessful in getting it off the ground. Groening has also pitched "Young Homer" and a spin-off about the non-Simpsons citizens of Springfield.
In 1995, Groening got into a major disagreement with Brooks and other Simpsons producers over "A Star Is Burns", a crossover episode with The Critic, an animated show also produced by Brooks and staffed with many former Simpsons crew members. Groening claimed that he feared viewers would "see it as nothing but a pathetic attempt to advertise The Critic at the expense of The Simpsons," and was concerned about the possible implication that he had created or produced The Critic. He requested his name be taken off the episode. Groening is credited with writing or co-writing the episodes "Some Enchanted Evening", "The Telltale Head", "Colonel Homer" and "22 Short Films About Springfield", as well as The Simpsons Movie, released in 2007. He has had several cameo appearances in the show, with a speaking role in the episode "My Big Fat Geek Wedding". He currently serves on The Simpsons as an executive producer and creative consultant. Futurama After spending a few years researching science fiction, Groening got together with Simpsons writer/producer David X. Cohen (known as David S. Cohen at the time) in 1997 and developed Futurama, an animated series about life in the year 3000. By the time they pitched the series to Fox in April 1998, Groening and Cohen had composed many characters and storylines; Groening claimed they had gone "overboard" in their discussions. Groening described trying to get the show on the air as "by far the worst experience of [his] grown-up life." The show premiered on March 28, 1999. Groening's writing credits for the show are for the premiere episode, "Space Pilot 3000" (co-written with Cohen), "Rebirth" (story) and "In-A-Gadda-Da-Leela" (story). After four years on the air, the show was canceled by Fox. In a situation similar to Family Guy, however, strong DVD sales and very stable ratings on Adult Swim brought Futurama back to life.
When Comedy Central began negotiating for the rights to air Futurama reruns, Fox suggested that there was a possibility of also creating new episodes. When Comedy Central committed to sixteen new episodes, it was decided that four straight-to-DVD films – Bender's Big Score (2007), The Beast with a Billion Backs (2008), Bender's Game (2008) and Into the Wild Green Yonder (2009) – would be produced. Since no new Futurama projects were in production, the movie Into the Wild Green Yonder was designed to stand as the Futurama series finale. However, Groening had expressed a desire to continue the Futurama franchise in some form, including as a theatrical film. In an interview with CNN, Groening said that "we have a great relationship with Comedy Central and we would love to do more episodes for them, but I don't know... We're having discussions and there is some enthusiasm but I can't tell if it's just me". Comedy Central commissioned an additional 26 new episodes, and began airing them in 2010. The show continued into 2013, before Comedy Central announced in April 2013 that they would not be renewing it beyond its seventh season. The final episode aired on September 4, 2013. On February 9, 2022, the series was revived at Hulu, set for a 2023 release. Disenchantment On January 15, 2016, it was announced that Groening was in talks with Netflix to develop a new animated series. On July 25, 2017, the series, Disenchantment, was ordered by Netflix. The first ten episodes
identity and change, space and time, causality, necessity, and possibility. It includes questions about the nature of consciousness and the relationship between mind and matter, between substance and attribute, and between potentiality and actuality. The word "metaphysics" comes from two Greek words that, together, literally mean "after or behind or among [the study of] the natural". It has been suggested that the term might have been coined by a first century CE editor who assembled various small selections of Aristotle's works into the treatise we now know by the name Metaphysics (μετὰ τὰ φυσικά, meta ta physika, 'after the Physics', another of Aristotle's works). Metaphysics studies questions related to what it is for something to exist and what types of existence there are. Metaphysics seeks to answer, in an abstract and fully general manner, the questions: What is there? And what is it like? Topics of metaphysical investigation include existence, objects and their properties, space and time, cause and effect, and possibility. Metaphysics is considered one of the four main branches of philosophy, along with epistemology, logic, and ethics. Epistemological foundation Metaphysical study is conducted using deduction from that which is known a priori. Like foundational mathematics (which is sometimes considered a special case of metaphysics applied to the existence of number), it tries to give a coherent account of the structure of the world, capable of explaining our everyday and scientific perception of the world, and being free from contradictions. In mathematics, there are many different ways to define numbers; similarly, in metaphysics, there are many different ways to define objects, properties, concepts, and other entities that are claimed to make up the world.
While metaphysics may, as a special case, study the entities postulated by fundamental science such as atoms and superstrings, its core topic is the set of categories such as object, property and causality which those scientific theories assume. For example: claiming that "electrons have charge" is a scientific theory; while exploring what it means for electrons to be (or at least, to be perceived as) "objects", charge to be a "property", and for both to exist in a topological entity called "space" is the task of metaphysics. There are two broad stances about what is "the world" studied by metaphysics. According to metaphysical realism, the objects studied by metaphysics exist independently of any observer so that the subject is the most fundamental of all sciences. Metaphysical anti-realism, on the other hand, assumes that the objects studied by metaphysics exist inside the mind of an observer, so the subject becomes a form of introspection and conceptual analysis. This position is of more recent origin. Some philosophers, notably Kant, discuss both of these "worlds" and what can be inferred about each one. Some, such as the logical positivists, and many scientists, reject metaphysical realism as meaningless and unverifiable. Others reply that this criticism also applies to any type of knowledge, including hard science, which claims to describe anything other than the contents of human perception, and thus that the world of perception is the objective world in some sense. Metaphysics itself usually assumes that some stance has been taken on these questions and that it may proceed independently of the choice—the question of which stance to take belongs instead to another branch of philosophy, epistemology. Central questions Ontology (being) Ontology is the branch of philosophy that studies concepts such as existence, being, becoming, and reality.
It includes the questions of how entities are grouped into basic categories and which of these entities exist on the most fundamental level. Ontology is sometimes referred to as the science of being. It has been characterized as general metaphysics in contrast to special metaphysics, which is concerned with more particular aspects of being. Ontologists often try to determine what the categories or highest kinds are and how they form a system of categories that provides an encompassing classification of all entities. Commonly proposed categories include substances, properties, relations, states of affairs and events. These categories are characterized by fundamental ontological concepts, like particularity and universality, abstractness and concreteness or possibility and necessity. Of special interest is the concept of ontological dependence, which determines whether the entities of a category exist on the most fundamental level. Disagreements within ontology are often about whether entities belonging to a certain category exist and, if so, how they are related to other entities. Identity and change Identity is a fundamental metaphysical concern. Metaphysicians investigating identity are tasked with the question of what, exactly, it means for something to be identical to itself, or – more controversially – to something else. Issues of identity arise in the context of time: what does it mean for something to be itself across two moments in time? How do we account for this? Another question of identity arises when we ask what our criteria ought to be for determining identity, and how the reality of identity interfaces with linguistic expressions. The metaphysical positions one takes on identity have far-reaching implications on issues such as the mind–body problem, personal identity, ethics, and law. A few ancient Greeks took extreme positions on the nature of change.
Parmenides denied change altogether, while Heraclitus argued that change was ubiquitous: "No man ever steps in the same river twice." Identity, sometimes called numerical identity, is the relation that a thing bears to itself, and which no thing bears to anything other than itself (cf. sameness). A modern philosopher who made a lasting impact on the philosophy of identity was Leibniz, whose Law of the Indiscernibility of Identicals is still widely accepted today. It states that if some object x is identical to some object y, then any property that x has, y will have as well. Put formally, it states: ∀x∀y[x = y → ∀P(Px ↔ Py)]. However, it does seem that objects can change over time. If one were to look at a tree one day, and the tree later lost a leaf, it would seem that one could still be looking at that same tree. Two rival theories to account for the relationship between change and identity are perdurantism, which treats the tree as a series of tree-stages, and endurantism, which maintains that the organism—the same tree—is present at every stage in its history. By appealing to intrinsic and extrinsic properties, endurantism finds a way to harmonize identity with change. Endurantists believe that objects persist by being strictly numerically identical over time. However, if Leibniz's Law of the Indiscernibility of Identicals is utilized to define numerical identity here, it seems that objects must be completely unchanged in order to persist. Discriminating between intrinsic properties and extrinsic properties, endurantists state that numerical identity means that, if some object x is identical to some object y, then any intrinsic property that x has, y will have as well. Thus, if an object persists, intrinsic properties of it are unchanged, but extrinsic properties can change over time. Besides the object itself, environments and other objects can change over time; properties that relate to other objects would change even if this object does not change.
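The direction of Leibniz's Law used here (identity entails indiscernibility) can be checked mechanically; a minimal sketch in Lean 4 (the theorem name is ours, not standard-library notation):

```lean
-- Leibniz's Law (indiscernibility of identicals): if x = y,
-- then any property P that holds of x also holds of y.
theorem indiscernibility_of_identicals {α : Type} (x y : α)
    (h : x = y) (P : α → Prop) (hx : P x) : P y :=
  h ▸ hx
```

Note that this is only the direction discussed above; the converse principle, the identity of indiscernibles, is a separate and more contested claim.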
Perdurantism can harmonize identity with change in another way. In four-dimensionalism, a version of perdurantism, what persists is a four-dimensional object which does not change although three-dimensional slices of the object may differ. Space and time Objects appear to us in space and time, while abstract entities such as classes, properties, and relations do not. How do space and time serve this function as a ground for objects? Are space and time entities themselves, of some form? Must they exist prior to objects? How exactly can they be defined? How is time related to change; must there always be something changing in order for time to exist? Causality Classical philosophy recognized a number of causes, including teleological future causes. In special relativity and quantum field theory the notions of space, time and causality become tangled together, with temporal orders of causations becoming dependent on who is observing them. The laws of physics are symmetrical in time, so could equally well be used to describe time as running backwards. Why then do we perceive it as flowing in one direction, the arrow of time, and as containing causation flowing in the same direction? For that matter, can an effect precede its cause? This was the title of a 1954 paper by Michael Dummett, which sparked a discussion that continues today. Earlier, in 1947, C. S. Lewis had argued that one can meaningfully pray concerning the outcome of, e.g., a medical test while recognizing that the outcome is determined by past events: "My free act contributes to the cosmic shape." Likewise, some interpretations of quantum mechanics, dating to 1945, involve backward-in-time causal influences. Causality is linked by many philosophers to the concept of counterfactuals. To say that A caused B means that if A had not happened then B would not have happened. This view was advanced by David Lewis in his 1973 paper "Causation". His subsequent papers further develop his theory of causation. 
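Lewis's counterfactual analysis of causation can be illustrated with a toy model. The sketch below assumes a finite set of hypothetical possible worlds and a crude similarity metric (the number of events on which a world disagrees with the actual one); both are illustrative simplifications, not Lewis's own formal apparatus:

```python
# Toy model of Lewis-style counterfactual dependence over a finite,
# hypothetical set of possible worlds. A world is a dict of event -> bool.
# "Similarity" is crudely measured as the number of events on which a
# world disagrees with the actual world (an illustrative choice only).

def closest_worlds(worlds, actual, condition):
    """Among worlds satisfying `condition`, keep those most similar to `actual`."""
    candidates = [w for w in worlds if condition(w)]
    distance = lambda w: sum(w[e] != actual[e] for e in actual)
    best = min(distance(w) for w in candidates)
    return [w for w in candidates if distance(w) == best]

def counterfactually_depends(worlds, actual, a, b):
    """B depends on A iff A and B actually occur, and in the closest
    worlds where A does not occur, B does not occur either."""
    if not (actual[a] and actual[b]):
        return False
    no_a = closest_worlds(worlds, actual, lambda w: not w[a])
    return all(not w[b] for w in no_a)

# Hypothetical example: striking a match (A) and the match lighting (B).
actual = {"strike": True, "oxygen": True, "light": True}
worlds = [
    actual,
    {"strike": False, "oxygen": True, "light": False},
    {"strike": False, "oxygen": False, "light": False},
]
```

On this toy model, the match's lighting counterfactually depends on the striking: in the closest world where no striking occurs, no lighting occurs either.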
Causality is usually required as a foundation for philosophy of science if science aims to understand causes and effects and make predictions about them. Necessity and possibility Metaphysicians investigate questions about the ways the world could have been. David Lewis, in On the Plurality of Worlds, endorsed a view called concrete modal realism, according to which facts about how things could have been are made true by other concrete worlds in which things are different. Other philosophers, including Gottfried Leibniz, have dealt with the idea of possible worlds as well. A necessary fact is true across all possible worlds. A possible fact is true in some possible world, even if not in the actual world. For example, it is possible that cats could have had two tails, or that any particular apple could have not existed. By contrast, certain propositions seem necessarily true, such as analytic propositions, e.g., "All bachelors are unmarried." The view that any analytic truth is necessary is not universally held among philosophers. A less controversial view is that self-identity is necessary, as it seems fundamentally incoherent to claim that any x is not identical to itself; this is known as the law of identity, a putative "first principle". Similarly, Aristotle describes the principle of non-contradiction: It is impossible that the same quality should both belong and not belong to the same thing ... This is the most certain of all principles ... Wherefore they who demonstrate refer to this as an ultimate opinion. For it is by nature the source of all the other axioms. Peripheral questions Metaphysical cosmology and cosmogony Metaphysical cosmology is the branch of metaphysics that deals with the world as the totality of all phenomena in space and time. Historically, it formed a major part of the subject alongside ontology, though its role is more peripheral in contemporary philosophy. It has had a broad scope, and in many cases was founded in religion.
The ancient Greeks drew no distinction between this use and their model for the cosmos. However, in modern times it addresses questions about the Universe which are beyond the scope of the physical sciences. It is distinguished from religious cosmology in that it approaches these questions using philosophical methods (e.g. dialectics). Cosmogony deals specifically with the origin of the universe. Modern metaphysical cosmology and cosmogony try to address questions such as: What is the origin of the Universe? What is its first cause? Is its existence necessary? (see monism, pantheism, emanationism and creationism) What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism) What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology) Mind and matter Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind. Substance dualism is a classical theory in which mind and body are essentially different, with the mind having some of the attributes traditionally assigned to the soul, and which creates an immediate conceptual puzzle about how the two interact. This form of substance dualism differs from the dualism of some eastern philosophical traditions (like Nyāya), which also posit a soul; for the soul, under their view, is ontologically distinct from the mind. Idealism postulates that material objects do not exist unless perceived and only as perceptions. Adherents of panpsychism, a kind of property dualism, hold that everything has a mental aspect, but not that everything exists in a mind. Neutral monism postulates that existence consists of a single substance that in itself is neither mental nor physical, but is capable of mental and physical aspects or attributes; thus it implies a dual-aspect theory.
For the last century, the dominant theories have been science-inspired including materialistic monism, type identity theory, token identity theory, functionalism, reductive physicalism, nonreductive physicalism, eliminative materialism, anomalous monism, property dualism, epiphenomenalism and emergence. Determinism and free will Determinism is the philosophical proposition that every event, including human cognition, decision and action, is causally determined by an unbroken chain of prior occurrences. It holds that nothing happens that has not already been determined. The principal consequence of the deterministic claim is that it poses a challenge to the existence of free will. The problem of free will is the problem of whether rational agents exercise control over their own actions and decisions. Addressing this problem requires understanding the relation between freedom and causation, and determining whether the laws of nature are causally deterministic. Some philosophers, known as incompatibilists, view determinism and free will as mutually exclusive. If they believe in determinism, they will therefore believe free will to be an illusion, a position known as hard determinism. Proponents range from Baruch Spinoza to Ted Honderich. Henri Bergson defended free will in his dissertation Time and Free Will from 1889. Others, labeled compatibilists (or "soft determinists"), believe that the two ideas can be reconciled coherently. Adherents of this view include Thomas Hobbes and many modern philosophers such as John Martin Fischer, Gary Watson, and Harry Frankfurt, among others. Incompatibilists who accept free will but reject determinism are called libertarians, a term not to be confused with the political sense. Robert Kane and Alvin Plantinga are modern defenders of this theory.
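The deterministic picture above can be pictured as a state-transition system whose entire future is fixed by its present state. A minimal sketch (the transition rule is an arbitrary hypothetical example, not drawn from any particular physical theory):

```python
# A deterministic system: the next state is a pure function of the current
# state alone, so the entire trajectory is fixed by the initial state --
# "nothing happens that has not already been determined."

def step(state):
    # Hypothetical deterministic law; any pure function would do.
    return (3 * state + 1) % 17

def trajectory(initial, n):
    """The first n+1 states of the system, fully determined by `initial`."""
    history = [initial]
    for _ in range(n):
        history.append(step(history[-1]))
    return history
```

Two runs from the same initial state can never diverge; an indeterministic system, by contrast, would need something beyond the current state (chance, or a free choice) to fix the next one.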
Natural and social kinds The earliest type of classification of social construction traces back to Plato in his dialogue Phaedrus where he claims that the biological classification system seems to carve nature at the joints. In contrast, later philosophers such as Michel Foucault and Jorge Luis Borges have challenged the capacity of natural and social classification. In his essay The Analytical Language of John Wilkins, Borges makes us imagine a certain encyclopedia where the animals are divided into (a) those that belong to the emperor; (b) embalmed ones; (c) those that are trained;... and so forth, in order to bring forward the ambiguity of natural and social kinds. According to metaphysics author Alyssa Ney: "the reason all this is interesting is that there seems to be a metaphysical difference between the Borgesian system and Plato's". The difference is not obvious but one classification attempts to carve entities up according to objective distinctions while the other does not. According to Quine, this notion is closely related to the notion of similarity. Number There are different ways to set up the notion of number in metaphysical theories. Platonist theories postulate number as a fundamental category itself. Others consider it to be a property of an entity called a "group" comprising other entities; or to be a relation held between several groups of entities, such as "the number four is the set of all sets of four things". Many of the debates around universals are applied to the study of number, and are of particular importance due to its status as a foundation for the philosophy of mathematics and for mathematics itself. Applied metaphysics Although metaphysics as a philosophical enterprise is highly hypothetical, it also has practical application in most other branches of philosophy, science, and now also information technology.
Such areas generally assume some basic ontology (such as a system of objects, properties, classes, and space-time) as well as other metaphysical stances on topics such as causality and agency, then build their own particular theories upon these. In science, for example, some theories are based on the ontological assumption of objects with properties (such as electrons having charge) while others may reject objects completely (such as quantum field theories, where spread-out "electronness" becomes a property of space-time rather than an object). "Social" branches of philosophy such as philosophy of morality, aesthetics and philosophy of religion (which in turn give rise to practical subjects such as ethics, politics, law, and art) all require metaphysical foundations, which may be considered as branches or applications of metaphysics. For example, they may postulate the existence of basic entities such as value, beauty, and God. Then they use these postulates to make their own arguments about consequences resulting from them. When philosophers in these subjects make their foundations, they are doing applied metaphysics, and may draw upon its core topics and methods to guide them, including ontology and other core and peripheral topics. As in science, the foundations chosen will in turn depend on the underlying ontology used, so philosophers in these subjects may have to dig right down to the ontological layer of metaphysics to find what is possible for their theories. For example, a contradiction obtained in a theory of God or Beauty might be due to an assumption that it is an object rather than some other kind of ontological entity. Relation to other disciplines Science Prior to the modern history of science, scientific questions were addressed as a part of natural philosophy. Originally, the term "science" simply meant "knowledge".
The scientific method, however, transformed natural philosophy into an empirical activity deriving from experiment, unlike the rest of philosophy. By the end of the 18th century, it had begun to be called "science" to distinguish it from other branches of philosophy. Science and philosophy have been considered separate disciplines ever since. Thereafter, metaphysics denoted philosophical enquiry of a non-empirical character into the nature of existence. Metaphysics continues asking "why" where science leaves off. For example, any theory of fundamental physics is based on some set of axioms, which may postulate the existence of entities such as atoms, particles, forces, charges, mass, or fields. Stating such postulates is considered to be the "end" of a science theory. Metaphysics takes these postulates and explores what they mean as human concepts. For example, do all theories of physics require the existence of space and time, objects, and properties? Or can they be expressed using only objects, or only properties? Do the objects have to retain their identity over time or can they change? If they change, then are they still the same object? Can theories be reformulated by converting properties or predicates (such as "red") into entities (such as redness or redness fields) or processes ('there is some redding happening over there' appears in some human languages in place of the use of properties)? Is the distinction between objects and properties fundamental to the physical world or to our perception of it? Much recent work has been devoted to analyzing the role of metaphysics in scientific theorizing. Alexandre Koyré led this movement, declaring in his book Metaphysics and Measurement, "It is not by following experiment, but by outstripping experiment, that the scientific mind makes progress." That metaphysical propositions can influence scientific theorizing is John Watkins' most lasting contribution to philosophy.
Since 1957 "he showed the ways in which some un-testable and hence, according to Popperian ideas, non-empirical propositions can nevertheless be influential in the development of properly testable and hence scientific theories. These profound results in applied elementary logic...represented an important corrective to positivist teachings about the meaninglessness of metaphysics and of normative claims". Imre Lakatos maintained that all scientific theories have a metaphysical "hard core" essential for the generation of hypotheses and theoretical assumptions. Thus, according to Lakatos, "scientific changes are connected with vast cataclysmic metaphysical revolutions." An example from biology of Lakatos' thesis: David Hull has argued that changes in the ontological status of the species concept have been central in the development of biological thought from Aristotle through Cuvier, Lamarck, and Darwin. Darwin's ignorance of metaphysics made it more difficult for him to respond to his critics because he could not readily grasp the ways in which their underlying metaphysical views differed from his own. In physics, new metaphysical ideas have arisen in connection with quantum mechanics, where subatomic particles arguably do not have the same sort of individuality as the particulars with which philosophy has traditionally been concerned. Also, adherence to a deterministic metaphysics in the face of the challenge posed by the quantum-mechanical uncertainty principle led physicists such as Albert Einstein to propose alternative theories that retained determinism. A.N. Whitehead is famous for creating a process philosophy metaphysics inspired by electromagnetism and special relativity. In chemistry, Gilbert Newton Lewis addressed the nature of motion, arguing that an electron should not be said to move when it has none of the properties of motion. 
Katherine Hawley notes that the metaphysics even of a widely accepted scientific theory may be challenged if it can be argued that the metaphysical presuppositions of the theory make no contribution to its predictive success. Theology There is a relationship between theological doctrines and philosophical reflection in the philosophy of a religion (such as Christian philosophy); philosophical reflections, by contrast, are strictly rational. On this way of seeing the two disciplines, if at least one of the premises of an argument is derived from revelation, the argument falls in the domain of theology; otherwise it falls into philosophy's domain. Rejections of metaphysics Meta-metaphysics is the branch of philosophy that is concerned with the foundations of metaphysics. A number of individuals have suggested that much or all of metaphysics should be rejected, a meta-metaphysical position known as metaphysical deflationism or ontological deflationism. In the 16th century, Francis Bacon rejected scholastic metaphysics, and argued strongly for what is now called empiricism; he was later seen as the father of modern empirical science. In the 18th century, David Hume took a strong position, arguing that all genuine knowledge involves either mathematics or matters of fact and that metaphysics, which goes beyond these, is worthless. He concludes his Enquiry Concerning Human Understanding (1748) with the statement: If we take in our hand any volume [book]; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion. Thirty-three years after Hume's Enquiry appeared, Immanuel Kant published his Critique of Pure Reason.
Although he followed Hume in rejecting much of previous metaphysics, he argued that there was still room for some synthetic a priori knowledge, concerned with matters of fact yet obtainable independent of experience. These included fundamental structures of space, time, and causality. He also argued for the freedom of the will and the existence of "things in themselves", the ultimate (but unknowable) objects of experience. Wittgenstein introduced the concept that metaphysics could be influenced by theories of aesthetics, via logic, viz. a world composed of "atomical facts". In the 1930s, A.J. Ayer and Rudolf Carnap endorsed Hume's position; Carnap quoted the passage above. They argued that metaphysical statements are neither true nor false but meaningless since, according to their verifiability theory of meaning, a statement is meaningful only if there can be empirical evidence for or against it. Thus, while Ayer rejected the monism of Spinoza, he avoided a commitment to pluralism, the contrary position, by holding both views to be without meaning. Carnap took a similar line with the controversy over the reality of the external world. While the logical positivism movement is now considered dead (with Ayer, a major proponent, admitting in a 1979 TV interview that "nearly all of it was false"), it has continued to influence the development of philosophy. Arguing against such rejections, the Scholastic philosopher Edward Feser held that Hume's critique of metaphysics, and specifically Hume's fork, is "notoriously self-refuting". Feser argues that Hume's fork itself is not a conceptual truth and is not empirically testable.
Some living philosophers, such as Amie Thomasson, have argued that many metaphysical questions can be dissolved just by looking at the way we use words; others, such as Ted Sider, have argued that metaphysical questions are substantive, and that we can make progress toward answering them by comparing theories according to a range of theoretical virtues inspired by the sciences, such as simplicity and explanatory power. Etymology The word "metaphysics" derives from the Greek words μετά (metá, "after") and φυσικά (physiká, "physics"). It was first
These categories are characterized by fundamental ontological concepts, like particularity and universality, abstractness and concreteness or possibility and necessity. Of special interest is the concept of ontological dependence, which determines whether the entities of a category exist on the most fundamental level. Disagreements within ontology are often about whether entities belonging to a certain category exist and, if so, how they are related to other entities. Identity and change Identity is a fundamental metaphysical concern. Metaphysicians investigating identity are tasked with the question of what, exactly, it means for something to be identical to itself, or – more controversially – to something else. Issues of identity arise in the context of time: what does it mean for something to be itself across two moments in time? How do we account for this? Another question of identity arises when we ask what our criteria ought to be for determining identity, and how the reality of identity interfaces with linguistic expressions. The metaphysical positions one takes on identity have far-reaching implications on issues such as the Mind–body problem, personal identity, ethics, and law. A few ancient Greeks took extreme positions on the nature of change. Parmenides denied change altogether, while Heraclitus argued that change was ubiquitous: "No man ever steps in the same river twice." Identity, sometimes called numerical identity, is the relation that a thing bears to itself, and which no thing bears to anything other than itself (cf. sameness). A modern philosopher who made a lasting impact on the philosophy of identity was Leibniz, whose Law of the Indiscernibility of Identicals is still widely accepted today. It states that if some object x is identical to some object y, then any property that x has, y will have as well. Put formally, it states However, it does seem that objects can change over time. 
If one were to look at a tree one day, and the tree later lost a leaf, it would seem that one could still be looking at that same tree. Two rival theories to account for the relationship between change and identity are perdurantism, which treats the tree as a series of tree-stages, and endurantism, which maintains that the organism—the same tree—is present at every stage in its history. By appealing to intrinsic and extrinsic properties, endurantism finds a way to harmonize identity with change. Endurantists believe that objects persist by being strictly numerically identical over time. However, if Leibniz's Law of the Indiscernibility of Identicals is utilized to define numerical identity here, it seems that objects must be completely unchanged in order to persist. Discriminating between intrinsic properties and extrinsic properties, endurantists state that numerical identity means that, if some object x is identical to some object y, then any intrinsic property that x has, y will have as well. Thus, if an object persists, intrinsic properties of it are unchanged, but extrinsic properties can change over time. Besides the object itself, environments and other objects can change over time; properties that relate to other objects would change even if this object does not change. Perdurantism can harmonize identity with change in another way. In four-dimensionalism, a version of perdurantism, what persists is a four-dimensional object which does not change although three-dimensional slices of the object may differ. Space and time Objects appear to us in space and time, while abstract entities such as classes, properties, and relations do not. How do space and time serve this function as a ground for objects? Are space and time entities themselves, of some form? Must they exist prior to objects? How exactly can they be defined? How is time related to change; must there always be something changing in order for time to exist? 
Causality Classical philosophy recognized a number of causes, including teleological future causes. In special relativity and quantum field theory the notions of space, time and causality become tangled together, with the temporal order of causes becoming dependent on who is observing them. The laws of physics are symmetrical in time, so they could equally well be used to describe time as running backwards. Why then do we perceive it as flowing in one direction, the arrow of time, and as containing causation flowing in the same direction? For that matter, can an effect precede its cause? This was the title of a 1954 paper by Michael Dummett, which sparked a discussion that continues today. Earlier, in 1947, C. S. Lewis had argued that one can meaningfully pray concerning the outcome of, e.g., a medical test while recognizing that the outcome is determined by past events: "My free act contributes to the cosmic shape." Likewise, some interpretations of quantum mechanics, dating to 1945, involve backward-in-time causal influences. Causality is linked by many philosophers to the concept of counterfactuals. To say that A caused B means that if A had not happened then B would not have happened. This view was advanced by David Lewis in his 1973 paper "Causation". His subsequent papers further develop his theory of causation. Causality is usually required as a foundation for philosophy of science if science aims to understand causes and effects and make predictions about them. Necessity and possibility Metaphysicians investigate questions about the ways the world could have been. David Lewis, in On the Plurality of Worlds, endorsed a view called concrete modal realism, according to which facts about how things could have been are made true by other concrete worlds in which things are different. Other philosophers, including Gottfried Leibniz, have dealt with the idea of possible worlds as well. A necessary fact is true across all possible worlds.
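This possible-worlds account of necessity and possibility can be pictured with a toy model (an illustrative sketch only; the worlds and proposition names are invented for the example):

```python
# Toy possible-worlds model: each "world" is the set of atomic
# propositions true at that world.
worlds = [
    {"cats_have_one_tail"},   # stands in for the actual world
    {"cats_have_two_tails"},  # a merely possible world
]

def possible(prop: str) -> bool:
    """A proposition is possible if it holds in at least one world."""
    return any(prop in world for world in worlds)

def necessary(prop: str) -> bool:
    """A proposition is necessary if it holds in every world."""
    return all(prop in world for world in worlds)

print(possible("cats_have_two_tails"))  # True: some world makes it true
print(necessary("cats_have_one_tail"))  # False: it fails in one world
```

A genuinely necessary fact would come out true under `necessary` because no possible world falsifies it, while a contingent fact is `possible` without being `necessary`.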
A possible fact is true in some possible world, even if not in the actual world. For example, it is possible that cats could have had two tails, or that any particular apple could have not existed. By contrast, certain propositions seem necessarily true, such as analytic propositions, e.g., "All bachelors are unmarried." The view that any analytic truth is necessary is not universally held among philosophers. A less controversial view is that self-identity is necessary, as it seems fundamentally incoherent to claim that any x is not identical to itself; this is known as the law of identity, a putative "first principle". Similarly, Aristotle describes the principle of non-contradiction: It is impossible that the same quality should both belong and not belong to the same thing ... This is the most certain of all principles ... Wherefore they who demonstrate refer to this as an ultimate opinion. For it is by nature the source of all the other axioms. Peripheral questions Metaphysical cosmology and cosmogony Metaphysical cosmology is the branch of metaphysics that deals with the world as the totality of all phenomena in space and time. Historically, it formed a major part of the subject alongside ontology, though its role is more peripheral in contemporary philosophy. It has had a broad scope, and in many cases was founded in religion. The ancient Greeks drew no distinction between this use and their model for the cosmos. However, in modern times it addresses questions about the Universe which are beyond the scope of the physical sciences. It is distinguished from religious cosmology in that it approaches these questions using philosophical methods (e.g. dialectics). Cosmogony deals specifically with the origin of the universe. Modern metaphysical cosmology and cosmogony try to address questions such as: What is the origin of the Universe? What is its first cause? Is its existence necessary?
(see monism, pantheism, emanationism and creationism) What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism) What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology) Mind and matter Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind. Substance dualism is a classical theory in which mind and body are essentially different, with the mind having some of the attributes traditionally assigned to the soul, and which creates an immediate conceptual puzzle about how the two interact. This form of substance dualism differs from the dualism of some eastern philosophical traditions (like Nyāya), which also posit a soul; for the soul, under their view, is ontologically distinct from the mind. Idealism postulates that material objects do not exist unless perceived and only as perceptions. Adherents of panpsychism, a kind of property dualism, hold that everything has a mental aspect, but not that everything exists in a mind. Neutral monism postulates that existence consists of a single substance that in itself is neither mental nor physical, but is capable of mental and physical aspects or attributes; thus it implies a dual-aspect theory. For the last century, the dominant theories have been science-inspired, including materialistic monism, type identity theory, token identity theory, functionalism, reductive physicalism, nonreductive physicalism, eliminative materialism, anomalous monism, property dualism, epiphenomenalism and emergence. Determinism and free will Determinism is the philosophical proposition that every event, including human cognition, decision and action, is causally determined by an unbroken chain of prior occurrences. It holds that nothing happens that has not already been determined.
The principal consequence of the deterministic claim is that it poses a challenge to the existence of free will. The problem of free will is the problem of whether rational agents exercise control over their own actions and decisions. Addressing this problem requires understanding the relation between freedom and causation, and determining whether the laws of nature are causally deterministic. Some philosophers, known as incompatibilists, view determinism and free will as mutually exclusive. If they believe in determinism, they will therefore believe free will to be an illusion, a position known as Hard Determinism. Proponents range from Baruch Spinoza to Ted Honderich. Henri Bergson defended free will in his dissertation Time and Free Will from 1889. Others, labeled compatibilists (or "soft determinists"), believe that the two ideas can be reconciled coherently. Adherents of this view include Thomas Hobbes and many modern philosophers such as John Martin Fischer, Gary Watson, Harry Frankfurt, and the like. Incompatibilists who accept free will but reject determinism are called libertarians, a term not to be confused with the political sense. Robert Kane and Alvin Plantinga are modern defenders of this theory. Natural and social kinds The earliest type of classification of social construction traces back to Plato in his dialogue Phaedrus where he claims that the biological classification system seems to carve nature at the joints. In contrast, later philosophers such as Michel Foucault and Jorge Luis Borges have challenged the capacity of natural and social classification. In his essay The Analytical Language of John Wilkins, Borges makes us imagine a certain encyclopedia where the animals are divided into (a) those that belong to the emperor; (b) embalmed ones; (c) those that are trained;... and so forth, in order to bring forward the ambiguity of natural and social kinds. 
According to metaphysics author Alyssa Ney: "the reason all this is interesting is that there seems to be a metaphysical difference between the Borgesian system and Plato's". The difference is not obvious, but one classification attempts to carve entities up according to objective distinctions while the other does not. According to Quine this notion is closely related to the notion of similarity. Number There are different ways to set up the notion of number in metaphysical theories. Platonist theories postulate number as a fundamental category itself. Others consider it to be a property of an entity called a "group" comprising other entities; or to be a relation held between several groups of entities, such as "the number four is the set of all sets of four things". Many of the debates around universals are applied to the study of number, and are of particular importance due to its status as a foundation for the philosophy of mathematics and for mathematics itself. Applied metaphysics Although metaphysics as a philosophical enterprise is highly hypothetical, it also has practical application in most other branches of philosophy, science, and now also information technology. Such areas generally assume some basic ontology (such as a system of objects, properties, classes, and space-time) as well as other metaphysical stances on topics such as causality and agency, then build their own particular theories upon these. In science, for example, some theories are based on the ontological assumption of objects with properties (such as electrons having charge) while others may reject objects completely (such as quantum field theories, where spread-out "electronness" becomes a property of space-time rather than an object).
"Social" branches of philosophy such as philosophy of morality, aesthetics and philosophy of religion (which in turn give rise to practical subjects such as ethics, politics, law, and art) all require metaphysical foundations, which may be considered as branches or applications of metaphysics. For example, they may postulate the existence of basic entities such as value, beauty, and God. Then they use these postulates to make their own arguments about consequences resulting from them. When philosophers in these subjects lay their foundations they are doing applied metaphysics, and may draw upon its core topics and methods to guide them, including ontology and other core and peripheral topics. As in science, the foundations chosen will in turn depend on the underlying ontology used, so philosophers in these subjects may have to dig right down to the ontological layer of metaphysics to find what is possible for their theories. For example, a contradiction obtained in a theory of God or Beauty might be due to an assumption that it is an object rather than some other kind of ontological entity. Relation to other disciplines Science Prior to the modern history of science, scientific questions were addressed as a part of natural philosophy. Originally, the term "science" simply meant "knowledge". The scientific method, however, transformed natural philosophy into an empirical activity deriving from experiment, unlike the rest of philosophy. By the end of the 18th century, it had begun to be called "science" to distinguish it from other branches of philosophy. Science and philosophy have been considered separate disciplines ever since. Thereafter, metaphysics denoted philosophical enquiry of a non-empirical character into the nature of existence. Metaphysics continues asking "why" where science leaves off.
For example, any theory of fundamental physics is based on some set of axioms, which may postulate the existence of entities such as atoms, particles, forces, charges, mass, or fields. Stating such postulates is considered to be the "end" of a science theory. Metaphysics takes these postulates and explores what they mean as human concepts. For example, do all theories of physics require the existence of space and time, objects, and properties? Or can they be expressed using only objects, or only properties? Do the objects have to retain their identity over time or can they change? If they change, then are they still the same object? Can theories be reformulated by converting properties or predicates (such as "red") into entities (such as redness or redness fields) or processes ('there is some redding happening over there' appears in some human languages in place of the use of properties)? Is the distinction between objects and properties fundamental to the physical world or to our perception of it? Much recent work has been devoted to analyzing the role of metaphysics in scientific theorizing. Alexandre Koyré led this movement, declaring in his book Metaphysics and Measurement, "It is not by following experiment, but by outstripping experiment, that the scientific mind makes progress." That metaphysical propositions can influence scientific theorizing is John Watkins' most lasting contribution to philosophy. Since 1957 "he showed the ways in which some un-testable and hence, according to Popperian ideas, non-empirical propositions can nevertheless be influential in the development of properly testable and hence scientific theories. These profound results in applied elementary logic ... represented an important corrective to positivist teachings about the meaninglessness of metaphysics and of normative claims". Imre Lakatos maintained that all scientific theories have a metaphysical "hard core" essential for the generation of hypotheses and theoretical assumptions.
Thus, according to Lakatos, "scientific changes are connected with vast cataclysmic metaphysical revolutions." An example of Lakatos' thesis from biology: David Hull has argued that changes in the ontological status of the species concept have been central in the development of biological thought from Aristotle through Cuvier, Lamarck, and Darwin. Darwin's ignorance of metaphysics made it more difficult for him to respond to his critics because he could not readily grasp the ways in which their underlying metaphysical views differed from his own. In physics, new metaphysical ideas have arisen in connection with quantum mechanics, where subatomic particles arguably do not have the same sort of individuality as the particulars with which philosophy has traditionally been concerned. Also, adherence to a deterministic metaphysics in the face of the challenge posed by the quantum-mechanical uncertainty principle led physicists such as Albert Einstein to propose alternative theories that retained determinism. A.N. Whitehead is famous for creating a process philosophy metaphysics inspired by electromagnetism and special relativity. In chemistry, Gilbert Newton Lewis addressed the nature of motion, arguing that an electron should not be said to move when it has none of the properties of motion. Katherine Hawley notes that the metaphysics even of a widely accepted scientific theory may be challenged if it can be argued that the metaphysical presuppositions of the theory make no contribution to its predictive success. Theology There is a relationship between theological doctrines and philosophical reflection in the philosophy of a religion (such as Christian philosophy); philosophical reflections are strictly rational. On this way of seeing the two disciplines, if at least one of the premises of an argument is derived from revelation, the argument falls in the domain of theology; otherwise it falls into philosophy's domain.
Rejections of metaphysics Meta-metaphysics is the branch of philosophy that is concerned with the foundations of metaphysics. A number of individuals have suggested that much or all of metaphysics should be rejected, a meta-metaphysical position known as metaphysical deflationism or ontological deflationism. In the 16th century, Francis Bacon rejected scholastic metaphysics, and argued strongly for what is now called empiricism, being seen later as the father of modern empirical science. In the 18th century, David Hume took a strong position, arguing that all genuine knowledge involves either mathematics or matters of fact and that metaphysics, which goes beyond these, is worthless. He concludes his Enquiry Concerning Human Understanding (1748) with the statement: If we take in our hand any volume [book]; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion. Thirty-three years after Hume's Enquiry appeared, Immanuel Kant published his Critique of Pure Reason. Although he followed Hume in rejecting much of previous metaphysics, he argued that there was still room for some synthetic a priori knowledge, concerned with matters of fact yet obtainable independent of experience. These included fundamental structures of space, time, and causality. He also argued for the freedom of the will and the existence of "things in themselves", the ultimate (but unknowable) objects of experience. Wittgenstein introduced the concept that metaphysics could be influenced by theories of aesthetics, via logic, viz. a world composed of "atomical facts". In the 1930s, A.J. Ayer and Rudolf Carnap endorsed Hume's position; Carnap quoted the passage above.
They argued that metaphysical statements are neither true nor false but meaningless since, according to their verifiability theory of meaning, a statement is meaningful only if there can be empirical evidence for or against it. Thus, while Ayer rejected the monism of Spinoza, he avoided a commitment to pluralism, the contrary position, by holding both views to be without meaning. Carnap took a similar line with the controversy over the reality of the external world. While the logical positivism movement is now considered dead (with Ayer, a major proponent, admitting in a 1979 TV interview that "nearly all of it was false"), it has continued to influence the development of philosophy. Arguing against such rejections, the Scholastic philosopher Edward Feser held that Hume's critique of metaphysics, and specifically Hume's fork, is "notoriously self-refuting". Feser argues that Hume's fork itself is not a conceptual truth and is not empirically testable. Some living philosophers, such as Amie Thomasson, have argued that many metaphysical questions can be dissolved just by looking at the way we use words; others, such as Ted Sider, have argued that metaphysical questions are substantive, and that we can make progress toward answering them by comparing theories according to a range of theoretical virtues inspired by the sciences, such as simplicity and explanatory power. Etymology The word "metaphysics" derives from the Greek words μετά (metá, "after") and φυσικά (physiká, "physics"). It was first used as the title for several of Aristotle's works, because they were usually anthologized after the works on physics in complete editions. The
hardships that followed. United States "Shuttle gap" Under the Bush administration, the Constellation program included plans for retiring the Space Shuttle program and replacing it with the capability for spaceflight beyond low Earth orbit. In the 2011 United States federal budget, the Obama administration canceled Constellation for being over budget and behind schedule while not innovating and investing in critical new technologies. As part of the Artemis program, NASA is developing the Orion spacecraft to be launched by the Space Launch System. Under the Commercial Crew Development plan, NASA relies on transportation services provided by the private sector to reach low Earth orbit, such as SpaceX Dragon 2, the Boeing Starliner or Sierra Nevada Corporation's Dream Chaser. The period between the retirement of the Space Shuttle in 2011 and the first launch into space of SpaceShipTwo Flight VP-03 on 13 December 2018 is similar to the gap between the end of Apollo in 1975 and the first Space Shuttle flight in 1981, and is referred to by a presidential Blue Ribbon Committee as the U.S. human spaceflight gap. Commercial private spaceflight Since the early 2000s, a variety of private spaceflight ventures have been undertaken. As of May 2021, SpaceX has launched humans to orbit, while Virgin Galactic has launched crew to the edge of space on a suborbital trajectory. Several other companies, including Blue Origin and Sierra Nevada, are developing crewed spacecraft. All four companies plan to fly commercial passengers in the emerging space tourism market. SpaceX has developed Crew Dragon, flying on Falcon 9. It first launched astronauts to orbit and to the ISS in May 2020 as part of the Demo-2 mission. Developed as part of NASA's Commercial Crew Development program, the capsule is also available for flights with other customers. A first tourist mission, Inspiration4, launched in September 2021.
Boeing is developing the Starliner capsule as part of NASA's Commercial Crew Development program; it is launched on a United Launch Alliance Atlas V launch vehicle. Starliner made an uncrewed flight in December 2019. A second uncrewed flight attempt was scrubbed in August 2021, with a NASA official saying it would likely not launch until 2022. A crewed flight is not expected before the second half of 2022. As with SpaceX, development funding has been provided by a mix of government and private funds. Virgin Galactic is developing SpaceShipTwo, a commercial suborbital spacecraft aimed at the space tourism market. It reached space in December 2018. Blue Origin is in a multi-year test program of its New Shepard vehicle and has carried out 16 uncrewed test flights as of September 2021, and one crewed flight carrying founder Jeff Bezos, his brother Mark Bezos, aviator Wally Funk, and 18-year-old Oliver Daemen on July 20, 2021. Passenger travel via spacecraft Over the decades, a number of spacecraft have been proposed for spaceliner passenger travel. Somewhat analogous to travel by airliner after the middle of the 20th century, these vehicles are proposed to transport large numbers of passengers to destinations in space, or on Earth via suborbital spaceflights. To date, none of these concepts have been built, although a few vehicles that carry fewer than 10 persons are currently in the test flight phase of their development process. One large spaceliner concept currently in early development is the SpaceX Starship, which, in addition to replacing the Falcon 9 and Falcon Heavy launch vehicles in the legacy Earth-orbit market after 2020, has been proposed by SpaceX for long-distance commercial travel on Earth, flying 100+ people suborbitally between two points in under one hour, also known as "Earth-to-Earth".
Small spaceplane or small capsule suborbital spacecraft have been under development for the past decade or so, and at least one of each type is currently under development. Both Virgin Galactic and Blue Origin have craft in active development: the SpaceShipTwo spaceplane and the New Shepard capsule, respectively. Both would carry approximately a half-dozen passengers up to space for a brief time of zero gravity before returning to the launch location. XCOR Aerospace had been developing the Lynx single-passenger spaceplane since the 2000s, but development was halted in 2017. Human representation and participation Participation and representation of humanity in space has been an issue ever since the first phase of space exploration. Some rights of non-spacefaring countries have been secured through international space law, declaring space the "province of all mankind", though the sharing of space by all humanity is sometimes criticized as imperialist and lacking. Beyond the lack of international inclusion, the inclusion of women and people of color has also been lacking. To make spaceflight more inclusive, organizations such as the Justspace Alliance and IAU-featured Inclusive Astronomy have been formed in recent years. Women The first woman to ever enter space was Valentina Tereshkova. She flew in 1963, but it was not until the 1980s that another woman entered space. At the time, all astronauts were required to be military test pilots; women were not able to enter this career, which is one reason for the delay in allowing women to join space crews. After the rules were changed, Svetlana Savitskaya became the second woman to enter space; she was also from the Soviet Union. Sally Ride became the next woman to enter space and the first woman to enter space through the United States program. Since then, eleven other countries have allowed women astronauts. The first all-female space walk occurred in 2019, by Christina Koch and Jessica Meir.
These two women had both participated in separate space walks with NASA. The first mission to the Moon with a woman aboard is planned for 2024. Despite these developments, women are still underrepresented among astronauts and especially cosmonauts. Issues that block potential applicants from the programs, and limit the space missions they are able to go on, include, for example:
agencies limiting women to half as much time in space as men, due to suppositions that women are at greater potential risk for cancer;
a lack of space suits sized appropriately for female astronauts.
Milestones
By achievement
12 April 1961: Yuri Gagarin was the first human in space and the first in Earth orbit, on Vostok 1.
17 July 1962 or 19 July 1963: Either Robert M. White or Joseph A. Walker (depending on the definition of the space border) was the first to pilot a spaceplane, the North American X-15, on 17 July 1962 (White) or 19 July 1963 (Walker).
18 March 1965: Alexei Leonov was the first to walk in space.
15 December 1965: Walter M. Schirra and Tom Stafford were the first to perform a space rendezvous, piloting their Gemini 6A spacecraft to achieve station-keeping one foot (30 cm) from Gemini 7 for over 5 hours.
16 March 1966: Neil Armstrong and David Scott were the first to rendezvous and dock, piloting their Gemini 8 spacecraft to dock with an uncrewed Agena Target Vehicle.
21–27 December 1968: Frank Borman, Jim Lovell, and William Anders were the first to travel beyond low Earth orbit (LEO) and the first to orbit the Moon, on the Apollo 8 mission, which orbited the Moon ten times before returning to Earth.
26 May 1969: Apollo 10 reached the fastest speed ever traveled by humans: 39,897 km/h (11.08 km/s or 24,791 mph), roughly 1/27,000 of lightspeed.
20 July 1969: Neil Armstrong and Buzz Aldrin were the first to land on the Moon, during Apollo 11.
Longest time in space
Valeri Polyakov performed the longest single spaceflight, from 8 January 1994 to 22 March 1995 (437 days, 17 hours, 58 minutes, and 16 seconds). Gennady Padalka has spent the most total time in space across multiple missions, 879 days.
Longest-duration crewed space station
The International Space Station has the longest period of continuous human presence in space, from 2 November 2000 to the present. This record was previously held by Mir, from Soyuz TM-8 on 5 September 1989 to Soyuz TM-29 on 28 August 1999, a span of 3,644 days (almost 10 years).
By nationality or sex
12 April 1961: Yuri Gagarin became the first Soviet citizen and the first human to reach space, on Vostok 1.
5 May 1961: Alan Shepard became the first American to reach space, on Freedom 7.
20 February 1962: John Glenn became the first American to orbit the Earth.
16 June 1963: Valentina Tereshkova became the first woman to go into space and to orbit the Earth.
2 March 1978: Vladimír Remek, a Czechoslovak, became the first non-American and non-Soviet in space, as part of the Interkosmos program.
2 April 1984: Rakesh Sharma became the first Indian citizen to reach Earth orbit.
25 July 1984: Svetlana Savitskaya became the first woman to walk in space.
15 October 2003: Yang Liwei became the first Chinese national in space and to orbit the Earth, on Shenzhou 5.
18 October 2019: Christina Koch and Jessica Meir conducted the first woman-only walk in space.
Sally Ride became the first American woman in space, in 1983. Eileen Collins was the first female Shuttle pilot, and with Shuttle mission STS-93 in 1999 she became the first woman to command a U.S. spacecraft. For many years, the USSR (later Russia) and the United States were the only countries whose astronauts flew in space. That ended with the 1978 flight of Vladimír Remek. To date, citizens from 38 nations (including space tourists) have flown in space aboard Soviet, American, Russian, and Chinese spacecraft.
Space programs
Human spaceflight programs have been conducted by the Soviet Union and Russian Federation, the United States, mainland China, and by American private spaceflight companies.
Current programs
The following space vehicles and spaceports are currently used for launching human spaceflights:
Soyuz program (Russia): spacecraft on the Soyuz launch vehicle, from Baikonur Cosmodrome; 146 crewed orbital flights since 1967, including two in-flight aborts which failed to reach orbit.
China Manned Space Program (China): Shenzhou spacecraft on a Long March launch vehicle, from Jiuquan Satellite Launch Center; eight crewed orbital flights since 2003.
SpaceShipTwo (US): air-launched from the White Knight Two carrier aircraft; the first two flights were from the Mojave Air and Space Port, with subsequent flights from Spaceport America; four crewed suborbital flights since 2018, as of the end of July 2021.
Crew Dragon (US): launched from Kennedy Space Center on a Falcon 9 rocket; four crewed orbital flights as of September 2021, including privately funded flights and flights under the Commercial Crew Program.
New Shepard (US): launched from a facility near Van Horn, Texas; three crewed suborbital launches as of December 2021.
The following space stations are currently maintained in Earth orbit for human occupation:
International Space Station (US, Russia, Europe, Japan, Canada), assembled in orbit: 51.65° orbital inclination; crews transported by Soyuz or Crew Dragon spacecraft.
Tiangong Space Station (China), assembled in orbit: 41.5° orbital inclination; crews transported by Shenzhou spacecraft.
Most of the time, the only humans in space are those aboard the ISS, which generally has a crew of 7 except during crew transitions, and those aboard Tiangong, which has a crew of 3. NASA and ESA use the term "human spaceflight" to refer to their programs of launching people into space.
These endeavors have also been referred to as "manned space missions", though because of gender specificity this is no longer official parlance according to NASA style guides. Planned future programs Under the Indian Human Spaceflight Program, India was planning to send humans into space on its orbital vehicle Gaganyaan before August 2022, but it has been delayed to 2023 due to the COVID-19 pandemic. The Indian Space Research Organisation (ISRO) began work on this project in 2006. The initial objective is to carry a crew of two or three to low Earth orbit (LEO) for a 3-to-7-day flight in a spacecraft on a GSLV Mk III rocket and return them safely for a water landing at a predefined landing zone. On 15 August 2018, Indian Prime Minister Narendra Modi declared that India would independently send humans into space before the 75th anniversary of independence in 2022. In 2019, ISRO revealed plans for a space station by 2030, followed by a crewed lunar mission. The program envisages the development of a fully autonomous orbital vehicle capable of carrying two or three crew members to low Earth orbit and bringing them safely back home. Since 2008, the Japan Aerospace Exploration Agency has been developing a crewed spacecraft based on the H-II Transfer Vehicle cargo spacecraft and a small space laboratory based on the Kibō Japanese Experiment Module. NASA is developing a plan to land humans on Mars by the 2030s. The first step will begin with Artemis 1 in 2021, sending an uncrewed Orion spacecraft to a distant retrograde orbit around the Moon and returning it to Earth after a 25-day mission. SpaceX is developing Starship, a fully reusable two-stage system, with near-Earth and cislunar applications and an ultimate goal of landing on Mars. The upper stage of the Starship system, also called Starship, has had 9 atmospheric test flights as of September 2021. A modified version of Starship is being developed for the Artemis program.
Several other countries and space agencies have announced and begun human spaceflight programs using natively developed equipment and technology, including Japan (JAXA), Iran (ISA), and North Korea (NADA). The plans for the Iranian crewed spacecraft are for a small spacecraft and space laboratory. North Korea's space program has plans for crewed spacecraft and small shuttle systems. National spacefaring attempts This section lists all nations which have attempted human spaceflight programs. This should not to be confused with nations with citizens who have traveled into space, including space tourists, flown or intending to fly by a foreign country's or non-domestic private company's space systems – who are not counted in this list toward their country's national spacefaring attempts. Safety concerns There are two main sources of hazard in space flight: those due to the hostile space environment, and those due to possible equipment malfunctions. Addressing these issues is of great importance for NASA and other space agencies before conducting the first extended crewed missions to destinations such as Mars. Environmental hazards Planners of human spaceflight missions face a number of safety concerns. Life support The basic needs for breathable air and drinkable water are addressed by the life support system of the spacecraft. Medical issues Astronauts may not be able to quickly return to Earth or receive medical supplies, equipment, or personnel if a medical emergency occurs. The astronauts may have to rely for long periods on limited resources and medical advice from the ground. The possibility of blindness and of bone loss have been associated with human space flight. On 31 December 2012, a NASA-supported study reported that spaceflight may harm the brains of astronauts and accelerate the onset of Alzheimer's disease. 
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, which included the potential hazards of a human mission to Mars. On 2 November 2017, scientists reported, based on MRI studies, that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space. Astronauts on longer space trips were affected by greater brain changes. Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to assure a healthy environment for astronauts. In March 2019, NASA reported that latent viruses in humans may be activated during space missions, possibly adding more risk to astronauts in future deep-space missions. On 25 September 2021, CNN reported that an alarm had sounded during the Inspiration4 Earth-orbital journey on the SpaceX Dragon 2. The alarm signal was found to be associated with an apparent toilet malfunction. Microgravity Medical data from astronauts in low Earth orbits for long periods, dating back to the 1970s, show several adverse effects of a microgravity environment: loss of bone density, decreased muscle strength and endurance, postural instability, and reductions in aerobic capacity. Over time these deconditioning effects can impair astronauts' performance or increase their risk of injury. In a weightless environment, astronauts put almost no weight on the back muscles or leg muscles used for standing up, which causes the muscles to weaken and get smaller. Astronauts can lose up to twenty per cent of their muscle mass on spaceflights lasting five to eleven days. The consequent loss of strength could be a serious problem in case of a landing emergency. 
Upon returning to Earth from long-duration flights, astronauts are considerably weakened and are not allowed to drive a car for twenty-one days. Astronauts experiencing weightlessness will often lose their orientation, get motion sickness, and lose their sense | to pilot a spaceplane, the North American X-15, on 17 July 1962 (White) or 19 July 1963 (Walker). 18 March 1965 Alexei Leonov was first to walk in space. 15 December 1965 Walter M. Schirra and Tom Stafford were first to perform a space rendezvous, piloting their Gemini 6A spacecraft to achieve station-keeping one foot (30 cm) from Gemini 7 for over 5 hours. 16 March 1966 Neil Armstrong and David Scott were first to rendezvous and dock, piloting their Gemini 8 spacecraft to dock with an uncrewed Agena Target Vehicle. 21–27 December 1968 Frank Borman, Jim Lovell, and William Anders were first to travel beyond low Earth orbit (LEO) and first to orbit the Moon, on the Apollo 8 mission, which orbited the Moon ten times before returning to Earth. 26 May 1969 Apollo 10 reaches the fastest speed ever traveled by a human: 39,897 km/h (11.08 km/s or 24,791 mph), or roughly 1/27,000 of lightspeed. 20 July 1969 Neil Armstrong and Buzz Aldrin were first to land on the Moon, during Apollo 11. Longest time in space Valeri Polyakov performed the longest single spaceflight, from 8 January 1994 to 22 March 1995 (437 days, 17 hours, 58 minutes, and 16 seconds). Gennady Padalka has spent the most total time in space on multiple missions, 879 days. Longest-duration crewed space station The International Space Station has the longest period of continuous human presence in space, 2 November 2000 to present (). This record was previously held by Mir, from Soyuz TM-8 on 5 September 1989 to the Soyuz TM-29 on 28 August 1999, a span of 3,644 days (almost 10 years). By nationality or sex 12 April 1961 Yuri Gagarin became the first Soviet and the first human to reach space, on Vostok 1. 
5 May 1961 Alan Shepard became the first American to reach space, on Freedom 7. 20 February 1962 John Glenn became the first American to orbit the Earth. 16 June 1963 Valentina Tereshkova became the first woman to go into space and to orbit the Earth. 2 March 1978 Vladimír Remek, a Czechoslovakian, became the first non-American and non-Soviet in space, as part of the Interkosmos program. 2 April 1984 Rakesh Sharma, became the first Indian citizen to reach Earth's orbit. 25 July 1984 Svetlana Savitskaya became the first woman to walk in space. 15 October 2003 Yang Liwei became the first Chinese in space and to orbit the Earth, on Shenzhou 5. 18 October 2019 Christina Koch and Jessica Meir conducted the first woman-only walk in space. Sally Ride became the first American woman in space, in 1983. Eileen Collins was the first female Shuttle pilot, and with Shuttle mission STS-93 in 1999 she became the first woman to command a U.S. spacecraft. For many years, only the USSR (later Russia) and the United States were the only countries whose astronauts flew in space. That ended with the 1978 flight of Vladimir Remek. , citizens from 38 nations (including space tourists) have flown in space aboard Soviet, American, Russian, and Chinese spacecraft. Space programs Human spaceflight programs have been conducted by the Soviet Union–Russian Federation, the United States, Mainland China, and by American private spaceflight companies. Current programs The following space vehicles and spaceports are currently used for launching human spaceflights: Soyuz program (Russia): spacecraft on Soyuz launch vehicle, from Baikonur Cosmodrome; 146 crewed orbital flights since 1967, including two in-flight aborts which failed to reach orbit, . China Manned Space Program (China): Shenzhou spacecraft on Long March launch vehicle, from Jiuquan Satellite Launch Center; eight crewed orbital flights since 2003, . SpaceShipTwo (US): Air launched from White Knight Two carrier aircraft. 
The first two flights were from the Mojave Air and Space Port, with subsequent flights from Spaceport America. Four crewed suborbital flights since 2018, as of the end of July 2021. Crew Dragon (US): Launched from Kennedy Space Center on a Falcon 9 rocket. Four crewed orbital flights as of September 2021, both privately funded and as part of the Commercial Crew Program. New Shepard (US): Launched from a facility near Van Horn, Texas. Three crewed suborbital launches as of December 2021. The following space stations are currently maintained in Earth orbit for human occupation: International Space Station (US, Russia, Europe, Japan, Canada) assembled in orbit: altitude , 51.65° orbital inclination; crews transported by Soyuz or Crew Dragon spacecraft Tiangong Space Station (China) assembled in orbit: 41.5° orbital inclination; crews transported by Shenzhou spacecraft Most of the time, the only humans in space are those aboard the ISS, which generally has a crew of 7 except during crew transitions, and those aboard Tiangong, which has a crew of 3. NASA and ESA use the term "human spaceflight" to refer to their programs of launching people into space. These endeavors have also been referred to as "manned space missions", though because of gender specificity this is no longer official parlance according to NASA style guides. Planned future programs Under the Indian Human Spaceflight Program, India was planning to send humans into space on its orbital vehicle Gaganyaan before August 2022, but it has been delayed to 2023, due to the COVID-19 pandemic. The Indian Space Research Organisation (ISRO) began work on this project in 2006. The initial objective is to carry a crew of two or three to low Earth orbit (LEO) for a 3-to-7-day flight in a spacecraft on a GSLV Mk III rocket and return them safely for a water landing at a predefined landing zone. 
On 15 August 2018, Indian Prime Minister Narendra Modi, declared India will independently send humans into space before the 75th anniversary of independence in 2022. In 2019, ISRO revealed plans for a space station by 2030, followed by a crewed lunar mission. The program envisages the development of a fully-autonomous orbital vehicle capable of carrying 2 or 3 crew members to an about low Earth orbit and bringing them safely back home. Since 2008, the Japan Aerospace Exploration Agency has developed the H-II Transfer Vehicle cargo-spacecraft-based crewed spacecraft and Kibō Japanese Experiment Module–based small space laboratory. NASA is developing a plan to land humans on Mars by the 2030s. The first step will begin with Artemis 1 in 2021, sending an uncrewed Orion spacecraft to a distant retrograde orbit around the Moon and returning it to Earth after a 25-day mission. SpaceX is developing Starship, a fully reusable two stage system, with near-Earth and cislunar applications and an ultimate goal of landing on Mars. The upper stage of the Starship system, also called Starship, has had 9 atmospheric test flights as of September 2021. A modified version of Starship is being developed for the Artemis program. Several other countries and space agencies have announced and begun human spaceflight programs using natively developed equipment and technology, including Japan (JAXA), Iran (ISA), and North Korea (NADA). The plans for the Iranian crewed spacecraft are for a small spacecraft and space laboratory. North Korea's space program has plans for crewed spacecraft and small shuttle systems. National spacefaring attempts This section lists all nations which have attempted human spaceflight programs. 
This should not to be confused with nations with citizens who have traveled into space, including space tourists, flown or intending to fly by a foreign country's or non-domestic private company's space systems – who are not counted in this list toward their country's national spacefaring attempts. Safety concerns There are two main sources of hazard in space flight: those due to the hostile space environment, and those due to possible equipment malfunctions. Addressing these issues is of great importance for NASA and other space agencies before conducting the first extended crewed missions to destinations such as Mars. Environmental hazards Planners of human spaceflight missions face a number of safety concerns. Life support The basic needs for breathable air and drinkable water are addressed by the life support system of the spacecraft. Medical issues Astronauts may not be able to quickly return to Earth or receive medical supplies, equipment, or personnel if a medical emergency occurs. The astronauts may have to rely for long periods on limited resources and medical advice from the ground. The possibility of blindness and of bone loss have been associated with human space flight. On 31 December 2012, a NASA-supported study reported that spaceflight may harm the brains of astronauts and accelerate the onset of Alzheimer's disease. In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, which included the potential hazards of a human mission to Mars. On 2 November 2017, scientists reported, based on MRI studies, that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space. Astronauts on longer space trips were affected by greater brain changes. 
Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to assure a healthy environment for astronauts. In March 2019, NASA reported that latent viruses in humans may be activated during space missions, possibly adding more risk to astronauts in future deep-space missions. On 25 September 2021, CNN reported that an alarm had sounded during the Inspiration4 Earth-orbital journey on the SpaceX Dragon 2. The alarm signal was found to be associated with an apparent toilet malfunction. Microgravity Medical data from astronauts in low Earth orbits for long periods, dating back to the 1970s, show several adverse effects of a microgravity environment: loss of bone density, decreased muscle strength and endurance, postural instability, and reductions in aerobic capacity. Over time these deconditioning effects can impair astronauts' performance or increase their risk of injury. In a weightless environment, astronauts put almost no weight on the back muscles or leg muscles used for standing up, which causes the muscles to weaken and get smaller. Astronauts can lose up to twenty per cent of their muscle mass on spaceflights lasting five to eleven days. The consequent loss of strength could be a serious problem in case of a landing emergency. Upon returning to Earth from long-duration flights, astronauts are considerably weakened and are not allowed to drive a car for twenty-one days. Astronauts experiencing weightlessness will often lose their orientation, get motion sickness, and lose their sense of direction as their bodies try to get used to a weightless environment. When they get back to Earth, they have to readjust and may have problems standing up, focusing their gaze, walking, and turning. Importantly, those motor disturbances only get worse the longer the exposure to weightlessness. 
These changes can affect the ability to perform tasks required for approach and landing, docking, remote manipulation, and emergencies that may occur while landing. In addition, after long space flight missions, male astronauts may experience severe eyesight problems, which may be a major concern for future deep space flight missions, including a crewed mission to the planet Mars. Long space flights can also alter a space traveler's eye movements. Radiation Without proper shielding, the crews of missions beyond low Earth orbit might be at risk from high-energy protons emitted by solar particle events (SPEs) associated with solar flares. Radiation doses astronauts would receive from a solar storm similar to that of the most powerful in recorded history, the Carrington Event, have been estimated to be able to cause acute radiation sickness and possibly even death. Another storm that could have inflicted a lethal radiation dose on astronauts outside Earth's protective magnetosphere occurred during the Space Age, shortly after Apollo 16 landed and before Apollo 17 launched. This solar storm of August 1972 would likely have caused acute illness, at least. Another type of radiation, galactic cosmic rays, presents further challenges to human spaceflight beyond low Earth orbit. There is also some scientific concern that extended spaceflight might slow down the body's ability to protect itself against diseases, resulting in a weakened immune system and the activation of dormant viruses in the body. Radiation can cause both short- and long-term consequences to the bone marrow stem cells from which blood and immune-system cells are created. Because the interior of a spacecraft is so small, a weakened immune system and more active viruses in the body can lead to a fast spread of infection. Isolation During long missions, astronauts are isolated and confined in small spaces. 
Depression, anxiety, cabin fever, and other psychological problems may occur more often than in the average person and could affect the crew's safety and mission success. NASA spends millions of dollars on psychological treatments for astronauts and former astronauts. To date, there is no way to prevent or reduce mental problems caused by extended stays in space. Due to these mental disorders, the efficiency of astronauts' work can be impaired.
to produce it today. It was named after Dmitri Mendeleev, father of the periodic table of the chemical elements. Using available microgram quantities of the isotope einsteinium-253, over a million mendelevium atoms may be produced each hour. The chemistry of mendelevium is typical for the late actinides, with a preponderance of the +3 oxidation state but also an accessible +2 oxidation state. All known isotopes of mendelevium have relatively short half-lives; there are currently no uses for it outside basic scientific research, and only small amounts are produced.

Discovery Mendelevium was the ninth transuranic element to be synthesized. It was first synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory Robert Choppin, Bernard G. Harvey, and team leader Stanley G. Thompson in early 1955 at the University of California, Berkeley. The team produced 256Md (half-life of 77 minutes) when they bombarded an 253Es target consisting of only a billion (10⁹) einsteinium atoms with alpha particles (helium nuclei) in the Berkeley Radiation Laboratory's 60-inch cyclotron, thus increasing the target's atomic number by two. 256Md thus became the first isotope of any element to be synthesized one atom at a time. In total, seventeen mendelevium atoms were produced. This discovery was part of a program, begun in 1952, that irradiated plutonium with neutrons to transmute it into heavier actinides. This method was necessary because the method previously used to synthesize transuranic elements, neutron capture, could not work: no beta-decaying isotopes of fermium were known that would produce isotopes of the next element, mendelevium, and the very short spontaneous-fission half-life of 258Fm constituted a hard limit to the success of the neutron-capture process. To predict whether the production of mendelevium would be possible, the team made use of a rough calculation.
The number of atoms produced would be approximately equal to the product of the number of atoms of target material, the target's cross section, the ion beam intensity, and the time of bombardment, this last factor being related to the half-life of the product when bombarding for a time on the order of its half-life. The result: under optimum conditions, the preparation of only about one atom of element 101 per experiment could be expected. This calculation demonstrated that it was nevertheless feasible to go ahead with the experiment. The target material, einsteinium-253, could be produced readily from irradiating plutonium: one year of irradiation would give a billion atoms, and its three-week half-life meant that the element 101 experiments could be conducted within one week after the produced einsteinium was separated and purified to make the target. However, it was necessary to upgrade the cyclotron to obtain the needed intensity of 10¹⁴ alpha particles per second; Seaborg applied for the necessary funds. While Seaborg applied for funding, Harvey worked on the einsteinium target, while Thompson and Choppin focused on methods for chemical isolation. Choppin suggested using α-hydroxyisobutyric acid to separate the mendelevium atoms from those of the lighter actinides. The actual synthesis was done by a recoil technique introduced by Albert Ghiorso. In this technique, the einsteinium was placed on the opposite side of the target from the beam, so that the recoiling mendelevium atoms would gain enough momentum to leave the target and be caught on a catcher foil made of gold. This recoil target was made by an electroplating technique developed by Alfred Chetham-Strode. The technique gave a very high yield, which was absolutely necessary when working with a product as rare and valuable as the einsteinium target material. The recoil target consisted of 10⁹ atoms of 253Es deposited electrolytically on a thin gold foil.
It was bombarded by 41 MeV alpha particles in the Berkeley cyclotron with a very high beam density of 6×10¹³ particles per second over an area of 0.05 cm². The target was cooled by water or liquid helium, and the foil could be replaced. Initial experiments were carried out in September 1954. No alpha decay was seen from mendelevium atoms, so Ghiorso suggested that the mendelevium had all decayed by electron capture to fermium and that the experiment should be repeated to search instead for spontaneous fission events. The experiment was repeated in February 1955. On the day of discovery, 19 February, alpha irradiation of the einsteinium target occurred in three three-hour sessions. The cyclotron was on the University of California campus, while the Radiation Laboratory was on the next hill. To deal with this situation, a complex procedure was used: Ghiorso took the catcher foils (there were three targets and three foils) from the cyclotron to Harvey, who would use aqua regia to dissolve them and pass them through an anion-exchange resin column to separate out the transuranium elements from the gold and other products. The resultant drops entered a test tube, which Choppin and Ghiorso took by car to the Radiation Laboratory as quickly as possible. There, Thompson and Choppin used a cation-exchange resin column and the α-hydroxyisobutyric acid. The solution drops were collected on platinum disks and dried under heat lamps. The three disks were expected to contain, respectively, the fermium, no new elements, and the mendelevium. Finally, the disks were placed in their own counters, which were connected to recorders such that spontaneous fission events would be recorded as large deflections on a graph showing the number and time of the decays. Mendelevium was thus not detected directly, but by observation of spontaneous fission events arising from its electron-capture daughter 256Fm.
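The team's rough yield formula described above — atoms produced ≈ (target atoms) × (cross section) × (beam flux) × (irradiation time, capped by decay of the product) — can be sketched numerically. The target size, beam parameters, and 77-minute half-life are those given in the text; the capture cross section of about a millibarn is an assumed illustrative value, not a figure reported here:

```python
import math

# Order-of-magnitude yield estimate for the 1955 element-101 bombardments.
n_target = 1e9            # Es-253 atoms on the gold foil (from the text)
flux = 6e13 / 0.05        # 6e13 alphas/s over 0.05 cm^2 -> alphas per cm^2 per second
sigma = 1e-27             # assumed capture cross section, ~1 millibarn (illustrative)
half_life = 77 * 60       # seconds, half-life of the product Md-256
tau = half_life / math.log(2)   # mean lifetime

rate = n_target * sigma * flux  # production rate, atoms per second

# Atoms surviving at the end of one three-hour irradiation session,
# accounting for decay of Md-256 during the bombardment:
t = 3 * 3600
atoms = rate * tau * (1 - math.exp(-t / tau))
print(f"about {atoms:.0f} atoms per session")
```

Under these assumptions the estimate comes out at a handful of atoms per session — the same order of magnitude as the team's expectation of roughly one atom per experiment, and as the seventeen atoms actually detected.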
The first event was greeted with a "hooray", followed by a "double hooray" and a "triple hooray". The fourth one eventually officially proved the chemical identification of the 101st element, mendelevium. In total, five decays were reported up until 4 a.m. Seaborg was notified, and the team left to sleep. Additional analysis and further experimentation showed the produced mendelevium isotope to have mass 256 and to decay by electron capture to fermium-256 with a half-life of 1.5 h. As it was the first of the second hundred of the chemical elements, it was decided that the element would be named "mendelevium" after the Russian chemist Dmitri Mendeleev, father of the periodic table. Because this discovery came during the Cold War, Seaborg had to request permission from the government of the United States to propose that the element be named for a Russian, but it was granted. The name "mendelevium" was accepted by the International Union of Pure and Applied Chemistry (IUPAC) in 1955 with the symbol "Mv", which was changed to "Md" at the next IUPAC General Assembly (Paris, 1957).

Characteristics Physical In the periodic table, mendelevium is located to the right of the actinide fermium, to the left of the actinide nobelium, and below the lanthanide thulium. Mendelevium metal has not yet been prepared in bulk quantities, and bulk preparation is currently impossible. Nevertheless, a number of predictions have been made, and some preliminary experimental results obtained, regarding its properties. The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fⁿs² configurations, whereas the latter have fⁿ⁻¹d¹s² configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals.
The conclusion was that the increased binding energy of the [Rn]5f¹²6d¹7s² configuration over the [Rn]5f¹³7s² configuration for mendelevium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number. Thermochromatographic studies with trace quantities of mendelevium by Zvara and Hübener from 1976 to 1982 confirmed this prediction. In 1990, Haire and Gibson estimated mendelevium metal to have an enthalpy of sublimation between 134 and 142 kJ/mol. Like the other divalent late actinides (except the once again trivalent lawrencium), metallic mendelevium should assume a face-centered cubic crystal structure; its metallic radius and density have also been estimated. Mendelevium's melting point has been estimated at 827 °C, the same value as that predicted for the neighboring element nobelium.

Chemical The chemistry of mendelevium is mostly known only in solution, in which it can take on the +3 or +2 oxidation states. The +1 state has also been reported, but has not yet been confirmed.
Before mendelevium's discovery, Seaborg and Katz predicted that it should be predominantly trivalent in aqueous solution and hence should behave similarly to other tripositive lanthanides and actinides. After the synthesis of mendelevium in 1955, these predictions were confirmed: first by the observation at its discovery that it eluted just after fermium in the trivalent actinide elution sequence from a cation-exchange resin column, and later by the 1967 observation that mendelevium could form insoluble hydroxides and fluorides that coprecipitated with trivalent lanthanide salts. Cation-exchange and solvent-extraction studies led to the conclusion that mendelevium is a trivalent actinide with an ionic radius somewhat smaller than that of the preceding actinide, fermium. Mendelevium can form coordination complexes with 1,2-cyclohexanedinitrilotetraacetic acid (DCTA). In reducing conditions, mendelevium(III) can easily be reduced to mendelevium(II), which is stable in aqueous solution. The standard reduction potential of the E°(Md³⁺→Md²⁺) couple was variously estimated in 1967 as −0.10 V or −0.20 V; experiments in 2013 established the value more precisely. In comparison, E°(Md³⁺→Md⁰) should be around −1.74 V, and E°(Md²⁺→Md⁰) should be around −2.5 V. Mendelevium(II)'s elution behavior has been compared with that of strontium(II) and europium(II). In 1973, mendelevium(I) was reported to have been produced by Russian scientists, who obtained it by reducing higher oxidation states of mendelevium with samarium(II). It was found to be stable in neutral water–ethanol solution and to be homologous to caesium(I). However, later experiments found no evidence for mendelevium(I) and found that mendelevium behaved like divalent elements when reduced, not like the monovalent alkali metals.
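Because reduction potentials combine through Gibbs free energies (ΔG = −nFE°) rather than adding directly, the quoted potentials can be cross-checked: the three-electron Md3+→Md0 potential should equal the charge-weighted average of the one-electron and two-electron steps. A minimal sketch of that consistency check, taking −0.15 V as an assumed round midpoint of the two early one-electron estimates quoted above:

```python
# Consistency check for the mendelevium reduction potentials quoted above.
# Free energies add: 3*F*E(3,0) = 1*F*E(3,2) + 2*F*E(2,0), so E(3,0) is the
# charge-weighted average of the two partial steps (F cancels out).
E_3_2 = -0.15   # V, E°(Md3+ -> Md2+); assumed midpoint of the -0.10/-0.20 V estimates
E_2_0 = -2.5    # V, E°(Md2+ -> Md0), as quoted
E_3_0 = (1 * E_3_2 + 2 * E_2_0) / 3
print(f"E°(Md3+ -> Md0) ≈ {E_3_0:.2f} V")
```

The result, about −1.72 V, agrees with the quoted estimate of around −1.74 V to within the spread of the input values.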
Nevertheless, the Russian team conducted further studies on the thermodynamics of cocrystallizing mendelevium with alkali metal chlorides, and concluded that mendelevium(I) had formed and could form mixed crystals with divalent elements, thus cocrystallizing with them. The status of the +1 oxidation state remains tentative. The electrode potential E°(Md⁴⁺→Md³⁺) was predicted in 1975 to be +5.4 V; 1967 experiments with the strong oxidizing agent sodium bismuthate were unable to oxidize mendelevium(III) to mendelevium(IV).

Atomic A mendelevium atom has 101 electrons, of which at least three (and perhaps four) can act as valence electrons. They are expected to be arranged in the configuration [Rn]5f¹³7s² (ground-state term symbol ²F₇/₂), although experimental verification of this electron configuration had not yet been made as of 2006. In forming compounds, three valence electrons may be lost, leaving behind a [Rn]5f¹² core; this conforms to the trend set by the other actinides, with their [Rn]5fⁿ electron configurations in the tripositive state. The first ionization potential of mendelevium was measured to be at most (6.58 ± 0.07) eV in 1974, based on the assumption that the 7s electrons would ionize before the 5f ones; this value has not yet been refined further, owing to mendelevium's scarcity and high radioactivity. The ionic radius of hexacoordinate Md³⁺ was preliminarily estimated in 1978 to be around 91.2 pm; 1988 calculations based on the logarithmic trend between distribution coefficients and ionic radius produced a value of 89.6 pm, as well as an estimate of the enthalpy of hydration. Md²⁺ should have an ionic radius of 115 pm and a hydration enthalpy of −1413 kJ/mol; Md⁺ should have an ionic radius of 117 pm.

Isotopes Seventeen isotopes of mendelevium are known, with mass numbers from 244 to 260; all are radioactive. Additionally, five nuclear isomers are known: 245mMd, 247mMd, 249mMd, 254mMd, and 258mMd.
Of these, the longest-lived isotope is 258Md with a half-life of 51.5 days, and the longest-lived isomer is 258mMd with a half-life of 58.0 minutes. Nevertheless, the shorter-lived 256Md (half-life 1.17 hours) is more often used in chemical experimentation because it can be produced in larger quantities from alpha particle irradiation of einsteinium. After 258Md, the next most stable mendelevium isotopes are 260Md with a half-life of 31.8 days, 257Md with a half-life of 5.52 hours, 259Md with a half-life of 1.60 hours, and 256Md with a half-life of 1.17 hours. All of the remaining mendelevium isotopes have half-lives of less than an hour, and the majority of these have half-lives of less than 5 minutes. The half-lives of mendelevium isotopes mostly increase smoothly from 244Md onwards, reaching a maximum at 258Md. Experiments and predictions suggest that the half-lives will then decrease, apart from 260Md with a half-life of 31.8 days, as spontaneous fission becomes the dominant decay mode: the mutual repulsion of the protons poses a limit to the island of relative stability of long-lived nuclei in the actinide series. Mendelevium-256, the most important isotope for mendelevium chemistry, decays through electron capture 90% of the time and alpha decay 10% of the time. It is most easily detected through the spontaneous fission of its electron-capture daughter fermium-256, but in the presence of other nuclides that undergo spontaneous fission, alpha decays at the characteristic energies for mendelevium-256 (7.205 and 7.139 MeV) can provide more useful identification. Production and isolation The lightest mendelevium isotopes (244Md to 247Md) are mostly produced through bombardment of bismuth targets with heavy argon ions, while slightly heavier ones (248Md to 253Md) are produced by bombarding plutonium and americium targets with lighter ions of carbon and nitrogen.
The most important and most stable isotopes are in the range from 254Md to 258Md and are produced through bombardment of einsteinium isotopes with alpha particles: einsteinium-253, -254, and -255 can all be used. 259Md is produced as a daughter of 259No, and 260Md can be produced in a transfer reaction between einsteinium-254 and oxygen-18. Typically, the most commonly used isotope 256Md is produced by bombarding either einsteinium-253 or -254 with alpha particles: einsteinium-254 is preferred when available because it has a longer half-life and therefore can be used as a target for longer. Using available microgram quantities of einsteinium, femtogram quantities of mendelevium-256 may be produced. The recoil momentum of the produced mendelevium-256 atoms is used to carry them physically away from the einsteinium target and onto a thin foil of metal (usually beryllium, aluminium, platinum, or gold) placed just behind the target in a vacuum. This eliminates the need for immediate chemical separation, which would be costly and would prevent reuse of the expensive einsteinium target. The mendelevium atoms are then trapped in a gas atmosphere (frequently helium), and a gas jet from a small opening in the reaction chamber carries the mendelevium along. Using a long capillary tube, and including potassium chloride aerosols in the helium gas, the mendelevium atoms can be transported over tens of meters to be chemically analyzed and have their quantity determined. The mendelevium can then be separated from the foil material and other fission products by applying acid to the foil and then coprecipitating the mendelevium with lanthanum fluoride, then using
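The femtogram-scale yields quoted above still correspond to millions of atoms, which then decay on the 256Md timescale given earlier. A rough sketch using the figures from the text (half-life 1.17 h; molar mass approximated by the mass number):

```python
AVOGADRO = 6.02214076e23  # atoms per mole
T_HALF_H = 1.17           # 256Md half-life in hours, per the text

def atom_count(mass_g, molar_mass_g_mol):
    """Number of atoms in a given mass."""
    return mass_g / molar_mass_g_mol * AVOGADRO

def surviving_fraction(t_hours):
    """Fraction of 256Md remaining after t_hours: N/N0 = 2**(-t/T)."""
    return 2.0 ** (-t_hours / T_HALF_H)

n0 = atom_count(1e-15, 256.0)  # one femtogram of 256Md: a few million atoms
n_after_3h = n0 * surviving_fraction(3.0)
print(f"start: {n0:.2e} atoms, after 3 h: {n_after_3h:.2e} atoms")
```

The short half-life is why chemical studies of mendelevium have to be completed within hours of production.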
consequence of P and P → Q in some logical system. Justification via truth table The validity of modus ponens in classical two-valued logic can be clearly demonstrated by use of a truth table. In instances of modus ponens we assume as premises that p → q is true and p is true. Only one line of the truth table—the first—satisfies these two conditions (p and p → q). On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true. Status While modus ponens is one of the most commonly used argument forms in logic, it must not be mistaken for a logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs that includes the "rule of definition" and the "rule of substitution". Modus ponens allows one to eliminate a conditional statement from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment or the law of detachment. Enderton, for example, observes that "modus ponens can produce shorter formulas from longer ones", and Russell observes that "the process of the inference cannot be reduced to symbols. Its sole record is the occurrence of ⊦q [the consequent] ... an inference is the dropping of a true premise; it is the dissolution of an implication". A justification for the "trust in inference is the belief that if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error". In other words: if one statement or proposition implies a second one, and the first statement or proposition is true, then the second one is also true. If P implies Q and P is true, then Q is true. Correspondence to other mathematical frameworks Probability calculus Modus ponens represents an instance of the Law of total probability which for a binary variable is expressed as: Pr(Q) = Pr(Q|P)Pr(P) + Pr(Q|¬P)Pr(¬P), where e.g.
Pr(Q) denotes the probability of Q and the conditional probability Pr(Q|P) generalizes the logical implication P → Q. Assume that Pr(P) = 1 is equivalent to P being TRUE, and that Pr(P) = 0 is equivalent to P being FALSE. It is then easy to see that Pr(Q) = 1 when Pr(Q|P) = 1 and Pr(P) = 1. Hence, the law of total probability represents a generalization of modus ponens. Subjective logic Modus ponens represents an instance of the binomial deduction operator in subjective logic expressed as: ω_{Q‖P}^A = (ω_{Q|P}^A, ω_{Q|¬P}^A) ⊛ ω_P^A, where ω_P^A denotes the subjective opinion about P as expressed by source A, and the conditional opinion ω_{Q|P}^A generalizes the logical implication P → Q. The deduced marginal opinion about Q is denoted by ω_{Q‖P}^A. The case where ω_P^A is an absolute TRUE opinion about P is equivalent to source A saying that P is TRUE, and the case where ω_P^A is an absolute FALSE opinion about P is equivalent to source A saying that P is FALSE. The deduction operator ⊛ of subjective logic produces an absolute TRUE deduced opinion ω_{Q‖P}^A when the conditional opinion ω_{Q|P}^A is absolute TRUE and the antecedent opinion ω_P^A is absolute TRUE. Hence, subjective logic deduction represents a generalization of both modus ponens and the Law of total probability. Alleged cases of failure Philosophers and linguists have identified a variety of cases where modus ponens appears to fail. One famous putative counterexample was identified by
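The probability-calculus reading described above can be made concrete; this sketch assumes nothing beyond the law of total probability for a binary variable:

```python
def pr_q(pr_q_given_p, pr_q_given_not_p, pr_p):
    """Law of total probability: Pr(Q) = Pr(Q|P)Pr(P) + Pr(Q|~P)Pr(~P)."""
    return pr_q_given_p * pr_p + pr_q_given_not_p * (1.0 - pr_p)

# Modus ponens as a limiting case: implication certain (Pr(Q|P) = 1) and
# antecedent TRUE (Pr(P) = 1) force Pr(Q) = 1, whatever Pr(Q|~P) is.
assert pr_q(1.0, 0.3, 1.0) == 1.0

# With uncertain inputs the same formula interpolates:
print(pr_q(0.9, 0.2, 0.7))  # 0.9*0.7 + 0.2*0.3 ≈ 0.69
```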
a conclusion: If P, then Q. Not Q. Therefore, not P. The first premise is a conditional ("if-then") claim, such as P implies Q. The second premise is an assertion that Q, the consequent of the conditional claim, is not the case. From these two premises it can be logically concluded that P, the antecedent of the conditional claim, is also not the case. For example: If the dog detects an intruder, the dog will bark. The dog did not bark. Therefore, no intruder was detected by the dog. Supposing that the premises are both true (the dog will bark if it detects an intruder, and does indeed not bark), it follows that no intruder has been detected. This is a valid argument since it is not possible for the conclusion to be false if the premises are true. (It is conceivable that there may have been an intruder that the dog did not detect, but that does not invalidate the argument; the first premise is "if the dog detects an intruder". The thing of importance is that the dog detects or does not detect an intruder, not whether there is one.) Another example: If I am the axe murderer, then I can use an axe. I cannot use an axe. Therefore, I am not the axe murderer. Another example: If Rex is a chicken, then he is a bird. Rex is not a bird. Therefore, Rex is not a chicken. Relation to modus ponens Every use of modus tollens can be converted to a use of modus ponens and one use of transposition applied to the premise that is a material implication. For example: If P, then Q. (premise – material implication) If not Q, then not P. (derived by transposition) Not Q. (premise) Therefore, not P. (derived by modus ponens) Likewise, every use of modus ponens can be converted to a use of modus tollens and transposition. Formal notation The modus tollens rule can be stated formally as: P → Q, ¬Q ∴ ¬P, where P → Q stands for the statement "P implies Q", and ¬Q stands for "it is not the case that Q" (or in brief "not Q").
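The conversion between the two rules rests on transposition, which can be verified exhaustively over the four truth-value assignments:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q false."""
    return (not p) or q

for p, q in product([True, False], repeat=2):
    # Transposition: P -> Q is equivalent to (not Q) -> (not P) ...
    assert implies(p, q) == implies(not q, not p)
    # ... so whenever both modus tollens premises hold, not-P follows:
    if implies(p, q) and not q:
        assert not p
print("modus tollens verified via transposition")
```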
Then, whenever "P → Q" and "¬Q" each appear by themselves as a line of a proof, then "¬P" can validly be placed on a subsequent line. The modus tollens rule may be written in sequent notation: P → Q, ¬Q ⊢ ¬P, where ⊢ is a metalogical symbol meaning that ¬P is a syntactic consequence of P → Q and ¬Q in some logical system; or as the statement of a functional tautology or theorem of propositional logic: ((P → Q) ∧ ¬Q) → ¬P, where P and Q are propositions expressed in some formal system; or including assumptions: Γ ⊢ P → Q and Γ ⊢ ¬Q together give Γ ⊢ ¬P, though since the rule does not change the set of assumptions, this is not strictly necessary. More complex rewritings involving modus tollens
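The tautology form of the rule can likewise be checked by brute force over all assignments:

```python
from itertools import product

def implies(p, q):
    """Material implication as a Boolean function."""
    return (not p) or q

# ((P -> Q) and not Q) -> not P must hold under every assignment of P and Q:
tautology = all(
    implies(implies(p, q) and not q, not p)
    for p, q in product([True, False], repeat=2)
)
print("tautology:", tautology)
```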
Subjective logic Modus tollens represents an instance of the abduction operator in subjective logic expressed as: ω_{P‖Q}^A = (ω_{Q|P}^A, ω_{Q|¬P}^A) ⊚ (a_P, ω_Q^A), where ω_Q^A denotes the subjective opinion about Q, and (ω_{Q|P}^A, ω_{Q|¬P}^A) denotes a pair of binomial conditional opinions, as expressed by source A. The parameter a_P denotes the base rate (aka. the prior probability) of P. The abduced marginal opinion on P is denoted ω_{P‖Q}^A. The conditional opinion ω_{Q|P}^A generalizes the logical statement P → Q, i.e. in addition to assigning TRUE or FALSE the source A can assign any subjective opinion to the statement. The case where ω_Q^A is an absolute TRUE opinion is equivalent to source
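In purely probabilistic terms, the abduction described above corresponds to inverting the conditionals with Bayes' theorem using the base rate of P. A sketch; the function name and the numeric inputs are illustrative:

```python
def pr_p_given_not_q(pr_q_given_p, pr_q_given_not_p, base_rate_p):
    """Bayes' theorem for Pr(P|~Q): the probabilistic analogue of abducing
    a conclusion about P from an observation about Q and the base rate of P."""
    num = (1.0 - pr_q_given_p) * base_rate_p
    den = num + (1.0 - pr_q_given_not_p) * (1.0 - base_rate_p)
    return num / den

# Modus tollens as a limiting case: if P guarantees Q (Pr(Q|P) = 1) and Q is
# observed FALSE, then P is ruled out regardless of its base rate:
assert pr_p_given_not_q(1.0, 0.2, 0.5) == 0.0
print(pr_p_given_not_q(0.8, 0.3, 0.5))  # soft case: P merely becomes less likely
```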
wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria punished her, presuming she was involved, by stripping her naked and scraping off her skin with clamshells (some say roofing tiles). Science and mathematics in the Islamic world during the Middle Ages followed various models, and modes of funding varied, based primarily on scholars. It was extensive patronage and strong intellectual policies implemented by specific rulers that allowed scientific knowledge to develop in many areas. Funding for translation of scientific texts in other languages was ongoing throughout the reign of certain caliphs, and it turned out that certain scholars became experts in the works they translated and in turn received further support for continuing to develop certain sciences. As these sciences received wider attention from the elite, more scholars were invited and funded to study particular sciences. An example of a translator and mathematician who benefited from this type of support was al-Khawarizmi. A notable feature of many scholars working under Muslim rule in medieval times is that they were often polymaths. Examples include the work on optics, maths and astronomy of Ibn al-Haytham. The Renaissance brought an increased emphasis on mathematics and science to Europe. During this period of transition from a mainly feudal and ecclesiastical culture to a predominantly secular one, many notable mathematicians had other occupations: Luca Pacioli (founder of accounting); Niccolò Fontana Tartaglia (notable engineer and bookkeeper); Gerolamo Cardano (earliest founder of probability and binomial expansion); Robert Recorde (physician) and François Viète (lawyer). As time passed, many mathematicians gravitated towards universities.
An emphasis on free thinking and experimentation had begun in Britain's oldest universities in the seventeenth century, at Oxford with the scientists Robert Hooke and Robert Boyle, and at Cambridge where Isaac Newton was Lucasian Professor of Mathematics. Moving into the 19th century, the objective of universities all across Europe evolved from teaching the "regurgitation of knowledge" to "encourag[ing] productive thinking." In 1810, Humboldt convinced the King of Prussia to build a university in Berlin based on Friedrich Schleiermacher's liberal ideas; the goal was to demonstrate the process of the discovery of knowledge and to teach students to "take account of fundamental laws of science in all their thinking." Thus, seminars and laboratories started to evolve. British universities of this period adopted some approaches familiar to the Italian and German universities, but as they already enjoyed substantial freedoms and autonomy, the changes there had begun with the Age of Enlightenment, the same influences that inspired Humboldt. The Universities of Oxford and Cambridge emphasized the importance of research, arguably more authentically implementing Humboldt's idea of a university than even German universities, which were subject to state authority. Overall, science (including mathematics) became the focus of universities in the 19th and 20th centuries. Students could conduct research in seminars or laboratories and began to produce doctoral theses with more scientific content. According to Humboldt, the mission of the University of Berlin was to pursue scientific knowledge. The German university system fostered professional, bureaucratically regulated scientific research performed in well-equipped laboratories, instead of the kind of research done by private and individual scholars in Great Britain and France.
In fact, Rüegg asserts that the German system is responsible for the development of the modern research university because it focused on the idea of "freedom of scientific research, teaching and study." Required education Mathematicians usually cover a breadth of topics within mathematics in their undergraduate education, and then proceed to specialize in topics of their own choice at the graduate level. In some universities, a qualifying exam serves to test both the breadth and depth of a student's understanding of mathematics; students who pass are permitted to work on a doctoral dissertation. Activities Applied mathematics Mathematicians involved with solving problems with applications in real life are called applied mathematicians. Applied mathematicians are mathematical scientists who, with their specialized knowledge and professional methodology, approach many of the imposing problems presented in related scientific fields. With professional focus on a wide variety of problems, theoretical systems, and localized constructs, applied mathematicians work regularly in the study and formulation of mathematical models. Mathematicians and applied mathematicians are considered to be two of the STEM (science, technology, engineering, and mathematics) careers. The discipline of applied mathematics concerns itself with mathematical methods that are typically used in science, engineering, business, and industry; thus, "applied mathematics" is a mathematical science with specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on problems, often concrete but sometimes abstract. As professionals focused on problem solving, applied mathematicians look into the formulation, study, and use of mathematical models in science, engineering, business, and other areas of mathematical practice. Pure mathematics Pure mathematics is mathematics that studies entirely abstract concepts.
From the eighteenth century onwards, this was a recognized category of mathematical activity, sometimes characterized as speculative mathematics, and at variance with the trend towards meeting the needs of navigation, astronomy, physics, economics, engineering, and other applications. Another insightful view put forth is that pure mathematics is not necessarily applied mathematics: it is possible to study abstract entities with respect to their intrinsic nature, and not be concerned with how they manifest in the real world. Even though the pure and applied viewpoints are distinct philosophical positions, in practice there is much overlap in the activity of pure and applied mathematicians. To develop | 507 BC) established the Pythagorean School, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number". It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins. The first woman mathematician recorded by history was Hypatia of Alexandria (AD 350 – 415). She succeeded her father as Librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria punished her, presuming she was involved, by stripping her naked and scraping off her skin with clamshells (some say roofing tiles). Science and mathematics in the Islamic world during the Middle Ages followed various models and modes of funding varied based primarily on scholars. It was extensive patronage and strong intellectual policies implemented by specific rulers that allowed scientific knowledge to develop in many areas. Funding for translation of scientific texts in other languages was ongoing throughout the reign of certain caliphs, and it turned out that certain scholars became experts in the works they translated and in turn received further support for continuing to develop certain sciences. 
and thread-based microfluidics. Disadvantages to open systems include susceptibility to evaporation, contamination, and limited flow rate. Continuous-flow microfluidics Continuous-flow microfluidics relies on the control of a steady-state liquid flow through narrow channels or porous media, predominantly by accelerating or hindering fluid flow in capillary elements. In paper-based microfluidics, capillary elements can be achieved through the simple variation of section geometry. In general, the actuation of liquid flow is implemented either by external pressure sources, external mechanical pumps, integrated mechanical micropumps, or by combinations of capillary forces and electrokinetic mechanisms. Continuous-flow microfluidic operation is the mainstream approach because it is easy to implement and less sensitive to protein fouling problems. Continuous-flow devices are adequate for many well-defined and simple biochemical applications, and for certain tasks such as chemical separation, but they are less suitable for tasks requiring a high degree of flexibility or fluid manipulations. These closed-channel systems are inherently difficult to integrate and scale because the parameters that govern the flow field vary along the flow path, making the fluid flow at any one location dependent on the properties of the entire system. Permanently etched microstructures also lead to limited reconfigurability and poor fault tolerance. Computer-aided design automation approaches for continuous-flow microfluidics have been proposed in recent years to alleviate the design effort and to solve the scalability problems. Process monitoring capabilities in continuous-flow systems can be achieved with highly sensitive microfluidic flow sensors based on MEMS technology, which offer resolutions down to the nanoliter range.
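For pressure-driven flow of the kind described above, a back-of-the-envelope flow-rate estimate comes from the Hagen-Poiseuille relation. The channel dimensions and pressure below are illustrative, and real microchannels are usually rectangular rather than circular:

```python
import math

def poiseuille_flow(radius_m, length_m, delta_p_pa, viscosity_pa_s):
    """Laminar flow rate in a circular channel: Q = pi * r^4 * dP / (8 * mu * L)."""
    return math.pi * radius_m**4 * delta_p_pa / (8.0 * viscosity_pa_s * length_m)

# Water (mu ~ 1e-3 Pa*s) through a 50 um radius, 1 cm long channel at 10 kPa:
q = poiseuille_flow(50e-6, 1e-2, 1e4, 1e-3)
print(f"{q * 1e9 * 60:.0f} uL/min")  # m^3/s converted to microliters per minute
```

The fourth-power dependence on radius is the practical point: halving the channel radius cuts the flow rate sixteenfold at the same pressure.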
Droplet-based microfluidics Droplet-based microfluidics is a subcategory of microfluidics in contrast with continuous microfluidics; droplet-based microfluidics manipulates discrete volumes of fluids in immiscible phases with low Reynolds number and laminar flow regimes. Interest in droplet-based microfluidics systems has been growing substantially in past decades. Microdroplets allow for handling miniature volumes (μl to fl) of fluids conveniently, provide better mixing, encapsulation, sorting, and sensing, and suit high throughput experiments. Exploiting the benefits of droplet-based microfluidics efficiently requires a deep understanding of droplet generation to perform various logical operations such as droplet manipulation, droplet sorting, droplet merging, and droplet breakup. Digital microfluidics Alternatives to the above closed-channel continuous-flow systems include novel open structures, where discrete, independently controllable droplets are manipulated on a substrate using electrowetting. Following the analogy of digital microelectronics, this approach is referred to as digital microfluidics. Le Pesant et al. pioneered the use of electrocapillary forces to move droplets on a digital track. The "fluid transistor" pioneered by Cytonix also played a role. The technology was subsequently commercialised by Duke University. By using discrete unit-volume droplets, a microfluidic function can be reduced to a set of repeated basic operations, i.e., moving one unit of fluid over one unit of distance. This "digitisation" method facilitates the use of a hierarchical and cell-based approach for microfluidic biochip design. Therefore, digital microfluidics offers a flexible and scalable system architecture as well as high fault-tolerance capability. 
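The unit-move "digitisation" idea described above can be caricatured in a few lines: a droplet's transport reduces to repeating one primitive operation on a cell grid. Electrode addressing, timing, and droplet physics are entirely abstracted away in this hypothetical sketch:

```python
def route(start, goal):
    """Return the list of grid cells visited when a droplet is moved from
    start to goal one unit of distance at a time (x first, then y)."""
    (x, y), (gx, gy) = start, goal
    path = [(x, y)]
    while (x, y) != (gx, gy):
        x += (gx > x) - (gx < x)      # one unit step toward the goal in x...
        if x == path[-1][0]:
            y += (gy > y) - (gy < y)  # ...otherwise one unit step in y
        path.append((x, y))
    return path

print(route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

A real digital-microfluidic compiler also has to schedule many droplets at once and keep them apart, but each droplet's motion still decomposes into exactly these unit moves.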
Moreover, because each droplet can be controlled independently, these systems also have dynamic reconfigurability, whereby groups of unit cells in a microfluidic array can be reconfigured to change their functionality during the concurrent execution of a set of bioassays. Although droplet-based systems also manipulate droplets in confined microfluidic channels, the droplets there are not controlled independently, so such systems should not be confused with "digital microfluidics". One common actuation method for digital microfluidics is electrowetting-on-dielectric (EWOD). Many lab-on-a-chip applications have been demonstrated within the digital microfluidics paradigm using electrowetting. However, recently other techniques for droplet manipulation have also been demonstrated using magnetic force, surface acoustic waves, optoelectrowetting, mechanical actuation, etc. Paper-based microfluidics Paper-based microfluidic devices fill a growing niche for portable, cheap, and user-friendly medical diagnostic systems. Paper-based microfluidics relies on the phenomenon of capillary penetration in porous media. To tune fluid penetration in porous substrates such as paper in two and three dimensions, the pore structure, wettability and geometry of the microfluidic devices can be controlled, while the viscosity and evaporation rate of the liquid play a further significant role. Many such devices feature hydrophobic barriers on hydrophilic paper that passively transport aqueous solutions to outlets where biological reactions take place. Current applications include portable glucose detection and environmental testing, with hopes of reaching areas that lack advanced medical diagnostic tools. Particle detection microfluidics One application area that has seen significant academic effort and some commercial effort is in the area of particle detection in fluids.
Particle detection of small fluid-borne particles down to about 1 μm in diameter is typically done using a Coulter counter, in which a weakly-conducting fluid such as saline water is passed through a small (~100 μm diameter) pore, generating an electrical signal that is directly proportional to the ratio of the particle volume to the pore volume. The physics behind this is relatively simple, described in a classic paper by DeBlois and Bean, and the implementation was first described in Coulter's original patent. This is the method used to e.g. size and count erythrocytes (red blood cells) as well as leukocytes (white blood cells) for standard blood analysis. The generic term for this method is resistive pulse sensing (RPS); Coulter counting is a trademark term. However, the RPS method does not work well for particles below 1 μm in diameter, as the signal-to-noise ratio falls below the reliably detectable limit, set mostly by the size of the pore through which the analyte passes and the input noise of the first-stage amplifier. The limit on the pore size in traditional RPS Coulter counters is set by the method used to make the pores, which, while a trade secret, most likely uses traditional mechanical methods. This is where microfluidics can have an impact: the lithography-based production of microfluidic devices, or more likely the production of reusable molds for making microfluidic devices using a molding process, can reach sizes much smaller than traditional machining. Critical dimensions down to 1 μm are easily fabricated, and with a bit more effort and expense, feature sizes below 100 nm can be patterned reliably as well. This enables the inexpensive production of pores integrated in a microfluidic circuit where the pore diameters can reach sizes of order 100 nm, with a concomitant reduction in the minimum particle diameters by several orders of magnitude.
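The scaling argument above can be put in numbers: the text states that the signal is proportional to the ratio of particle volume to pore volume, so shrinking the pore recovers signal for proportionally smaller particles. The pore length is taken equal to its diameter here, an assumption for illustration only:

```python
import math

def sphere_volume(d):
    """Volume of a spherical particle of diameter d."""
    return math.pi * d**3 / 6.0

def pore_volume(d, length):
    """Volume of a cylindrical pore of diameter d and given length."""
    return math.pi * d**2 / 4.0 * length

def rps_signal_ratio(particle_d, pore_d, pore_len):
    """Relative RPS signal ~ particle volume / pore volume, per the text."""
    return sphere_volume(particle_d) / pore_volume(pore_d, pore_len)

# 1 um particle in a traditional 100 um pore vs. a 100 nm particle in a
# microfabricated 1 um pore:
big = rps_signal_ratio(1e-6, 100e-6, 100e-6)
small = rps_signal_ratio(100e-9, 1e-6, 1e-6)
print(f"signal ratios: {big:.1e} vs {small:.1e}")
```

Under these assumptions the microfabricated pore yields a relative signal three orders of magnitude larger for a particle ten times smaller, which is the point of the lithographic approach.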
As a result there has been some university-based development of microfluidic particle counting and sizing with the accompanying commercialization of this technology. This method has been termed microfluidic resistive pulse sensing (MRPS). Microfluidic-assisted magnetophoresis One major area of application for microfluidic devices is the separation and sorting of different fluids or cell types. Recent developments in the microfluidics field have seen the integration of microfluidic devices with magnetophoresis: the migration of particles by a magnetic field. This can be accomplished by sending a fluid containing at least one magnetic component through a microfluidic channel that has a magnet positioned along the length of the channel. This creates a magnetic field inside the microfluidic channel which draws magnetically active substances towards it, effectively separating the magnetic and non-magnetic components of the fluid. This technique can be readily utilized in industrial settings where the fluid at hand already contains magnetically active material. For example, a handful of metallic impurities can find their way into certain consumable liquids, namely milk and other dairy products. Conveniently, in the case of milk, many of these metal contaminants exhibit paramagnetism. Therefore, before packaging, milk can be flowed through channels with magnetic gradients as a means of purifying out the metal contaminants. Other, more research-oriented applications of microfluidic-assisted magnetophoresis are numerous and are generally targeted towards cell separation. The general way this is accomplished involves several steps. First, a paramagnetic substance (usually micro/nanoparticles or a paramagnetic fluid) needs to be functionalized to target the cell type of interest. This can be accomplished by identifying a transmembranal protein unique to the cell type of interest and subsequently functionalizing magnetic particles with the complementary antigen or antibody. 
Once the magnetic particles are functionalized, they are dispersed in a cell mixture, where they bind only to the cells of interest. The resulting cell/particle mixture can then be flowed through a microfluidic device with a magnetic field to separate the targeted cells from the rest. Conversely, microfluidic-assisted magnetophoresis may be used to facilitate efficient mixing within microdroplets or plugs. To accomplish this, microdroplets are injected with paramagnetic nanoparticles and flowed through a straight channel which passes through rapidly alternating magnetic fields. This causes the magnetic particles to be quickly pushed from side to side within the droplet and results in the mixing of the microdroplet contents. This eliminates the need for the tedious engineering considerations that are necessary for traditional, channel-based droplet mixing. Other research has also shown that the label-free separation of cells may be possible by suspending cells in a paramagnetic fluid and taking advantage of the magneto-Archimedes effect. While this does eliminate the complexity of particle functionalization, more research is needed to fully understand the magneto-Archimedes phenomenon and how it can be used to this end. This is not an exhaustive list of the various applications of microfluidic-assisted magnetophoresis; the above examples merely highlight the versatility of this separation technique in both current and future applications.

Key application areas
Microfluidic structures include micropneumatic systems, i.e. microsystems for the handling of off-chip fluids (liquid pumps, gas valves, etc.), and microfluidic structures for the on-chip handling of nanoliter (nl) and picoliter (pl) volumes. To date, the most successful commercial application of microfluidics is the inkjet printhead. Additionally, microfluidic manufacturing advances mean that makers can produce the devices in low-cost plastics and automatically verify part quality.
Advances in microfluidics technology are revolutionizing molecular biology procedures for enzymatic analysis (e.g., glucose and lactate assays), DNA analysis (e.g., polymerase chain reaction and high-throughput sequencing), proteomics, and chemical synthesis. The basic idea of microfluidic biochips is to integrate assay operations such as detection, as well as sample pre-treatment and sample preparation, on one chip. An emerging application area for biochips is clinical pathology, especially the immediate point-of-care diagnosis of diseases. In addition, microfluidics-based devices, capable of continuous sampling and real-time testing of air/water samples for biochemical toxins and other dangerous pathogens, can serve as an always-on "bio-smoke alarm" for early warning. Microfluidic technology has led to the creation of powerful tools for biologists to control the complete cellular environment, leading to new questions and discoveries. Many diverse advantages of this technology for microbiology are listed below:
- General single cell studies, including growth
- Cellular aging: microfluidic devices such as the "mother machine" allow tracking of thousands of individual cells for many generations until they die
- Microenvironmental control: ranging from mechanical environment to chemical environment
- Precise spatiotemporal concentration gradients by incorporating multiple chemical inputs to a single device
- Force measurements of adherent cells or confined chromosomes: objects trapped in a microfluidic device can be directly manipulated using optical tweezers or other force-generating methods
- Confining cells and exerting controlled forces by coupling with external force-generation methods such as Stokes flow, optical tweezers, or controlled deformation of the PDMS (polydimethylsiloxane) device
- Electric field integration
- Plant on a chip and plant tissue culture
- Antibiotic resistance: microfluidic devices can be used as heterogeneous environments for microorganisms.
In a heterogeneous environment, it is easier for a microorganism to evolve. This can be useful for testing the acceleration of evolution of a microorganism, or for testing the development of antibiotic resistance. Some of these areas are further elaborated in the sections below:

DNA chips (microarrays)
Early biochips were based on the idea of a DNA microarray, e.g., the GeneChip DNA array from Affymetrix, which is a piece of glass, plastic or silicon substrate on which pieces of DNA (probes) are affixed in a microscopic array. Similar to a DNA microarray, a protein array is a miniature array where a multitude of different capture agents, most frequently monoclonal antibodies, are deposited on a chip surface; they are used to determine the presence and/or amount of proteins in biological samples, e.g., blood. A drawback of DNA and protein arrays is that they are neither reconfigurable nor scalable after manufacture. Digital microfluidics has been described as a means for carrying out digital PCR.

Molecular biology
In addition to microarrays, biochips have been designed for two-dimensional electrophoresis, transcriptome analysis, and PCR amplification. Other applications include various electrophoresis and liquid chromatography applications for proteins and DNA, cell separation (in particular, blood cell separation), protein analysis, cell manipulation and analysis (including cell viability analysis), and microorganism capturing.

Evolutionary biology
By combining microfluidics with landscape ecology and nanofluidics, a nano/micro fabricated fluidic landscape can be constructed by building local patches of bacterial habitat and connecting them by dispersal corridors. The resulting landscapes can be used as physical implementations of an adaptive landscape, by generating a spatial mosaic of patches of opportunity distributed in space and time. The patchy nature of these fluidic landscapes allows for the study of adapting bacterial cells in a metapopulation system.
The evolutionary ecology of these bacterial systems in these synthetic ecosystems allows for using biophysics to address questions in evolutionary biology.

Cell behavior
The ability to create precise and carefully controlled chemoattractant gradients makes microfluidics the ideal tool to study motility, chemotaxis, and the ability to evolve or develop resistance to antibiotics in small populations of microorganisms and in a short period of time. These microorganisms include bacteria and the broad range of organisms that form the marine microbial loop, which is responsible for regulating much of the oceans' biogeochemistry. Microfluidics has also greatly aided the study of durotaxis by facilitating the creation of durotactic (stiffness) gradients.

Cellular biophysics
By rectifying the motion of individual swimming bacteria, microfluidic structures can be used to extract mechanical motion from a population
prime, then 2^(p−1)(2^p − 1) is a perfect number. In the 18th century, Leonhard Euler proved that, conversely, all even perfect numbers have this form. This is known as the Euclid–Euler theorem. It is unknown whether there are any odd perfect numbers.

History
Mersenne primes take their name from the 17th-century French scholar Marin Mersenne, who compiled what was supposed to be a list of Mersenne primes with exponents up to 257. The exponents listed by Mersenne were as follows: 2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257. His list replicated the known primes of his time with exponents up to 19. His next entry, 31, was correct, but the list then became largely incorrect, as Mersenne mistakenly included M67 and M257 (which are composite) and omitted M61, M89, and M107 (which are prime). Mersenne gave little indication of how he came up with his list. Édouard Lucas proved in 1876 that M127 is indeed prime, as Mersenne claimed. This was the largest known prime number for 75 years until 1951, when Ferrier found a larger prime, (2^148 + 1)/17, using a desk calculating machine. M61 was determined to be prime in 1883 by Ivan Mikheevich Pervushin, though Mersenne claimed it was composite, and for this reason it is sometimes called Pervushin's number. This was the second-largest known prime number, and it remained so until 1911. Lucas had shown another error in Mersenne's list in 1876: without finding a factor, he demonstrated that M67 is actually composite. No factor was found until a famous talk by Frank Nelson Cole in 1903. Without speaking a word, he went to a blackboard and raised 2 to the 67th power, then subtracted one. On the other side of the board, he multiplied 193,707,721 × 761,838,257,287 and got the same number, then returned to his seat (to applause) without speaking. He later said that the result had taken him "three years of Sundays" to find. A correct list of all Mersenne primes in this number range was completed and rigorously verified only about three centuries after Mersenne published his list.
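The Euclid–Euler correspondence is easy to verify computationally for small exponents. The sketch below (plain Python; brute-force divisor sums, so small p only) checks that each Mersenne prime 2^p − 1 yields the even perfect number 2^(p−1)(2^p − 1):

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(n: int) -> int:
    """Sum of the divisors of n that are strictly less than n."""
    return sum(d for d in range(1, n) if n % d == 0)

# Euclid: if 2^p - 1 is prime, then 2^(p-1) * (2^p - 1) is perfect.
# Euler: conversely, every even perfect number has this form.
for p in [2, 3, 5, 7]:
    m = 2 ** p - 1
    assert is_prime(m)
    perfect = 2 ** (p - 1) * m          # 6, 28, 496, 8128
    assert proper_divisor_sum(perfect) == perfect
```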
Searching for Mersenne primes
Fast algorithms for finding Mersenne primes are available, and the eight largest known prime numbers are Mersenne primes. The first four Mersenne primes (3, 7, 31, and 127) were known in antiquity. The fifth, 2^13 − 1 = 8191, was discovered anonymously before 1461; the next two (2^17 − 1 and 2^19 − 1) were found by Pietro Cataldi in 1588. After nearly two centuries, 2^31 − 1 was verified to be prime by Leonhard Euler in 1772. The next (in historical, not numerical order) was 2^127 − 1, found by Édouard Lucas in 1876, then 2^61 − 1 by Ivan Mikheevich Pervushin in 1883. Two more (2^89 − 1 and 2^107 − 1) were found early in the 20th century, by R. E. Powers in 1911 and 1914, respectively. The most efficient method presently known for testing the primality of Mersenne numbers is the Lucas–Lehmer primality test. Specifically, it can be shown that for prime p > 2, M_p = 2^p − 1 is prime if and only if M_p divides S_(p−2), where S_0 = 4 and S_k = (S_(k−1))^2 − 2 for k > 0. During the era of manual calculation, all the exponents up to and including 257 were tested with the Lucas–Lehmer test and found to be composite. A notable contribution was made by retired Yale physics professor Horace Scudder Uhler, who did the calculations for exponents 157, 167, 193, 199, 227, and 229. Unfortunately for those investigators, the interval they were testing contains the largest known relative gap between Mersenne primes: the next Mersenne prime exponent, 521, would turn out to be more than four times larger than the previous record of 127. The search for Mersenne primes was revolutionized by the introduction of the electronic digital computer. Alan Turing searched for them on the Manchester Mark 1 in 1949, but the first successful identification of a Mersenne prime, M521, by this means was achieved at 10:00 pm on January 30, 1952, using the U.S. National Bureau of Standards Western Automatic Computer (SWAC) at the Institute for Numerical Analysis at the University of California, Los Angeles, under the direction of D. H. Lehmer, with a computer search program written and run by Prof. R. M. Robinson.
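The Lucas–Lehmer recurrence is simple enough to state as code. The sketch below reduces modulo M_p at every step so the intermediate values stay p bits wide, and uses trial division (adequate at this scale) to restrict the test to prime exponents:

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff M_p divides S_(p-2), where S_0 = 4 and S_k = S_(k-1)^2 - 2.
    Reducing modulo M_p at each step keeps S at p bits."""
    if p == 2:
        return True              # M_2 = 3 is prime; the test proper needs odd p
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def is_prime(n: int) -> bool:
    """Trial division, used only to pick candidate exponents."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Recover the corrected version of Mersenne's list (exponents <= 257):
mersenne_exponents = [p for p in range(2, 258)
                      if is_prime(p) and lucas_lehmer(p)]
# -> [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```

Run over all exponents up to 257, this recovers the corrected version of Mersenne's list in a fraction of a second; the SWAC search that found M521 was mechanizing exactly this recurrence.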
It was the first Mersenne prime to be identified in thirty-eight years; the next one, M607, was found by the computer a little less than two hours later. Three more (M1279, M2203, and M2281) were found by the same program in the next several months. M4253 was the first discovered titanic prime, M44497 was the first discovered gigantic prime, and M6972593 was the first megaprime to be discovered, being a prime with at least 1,000,000 digits. The number of digits in the decimal representation of M_n = 2^n − 1 equals ⌊n · log₁₀ 2⌋ + 1, where ⌊x⌋ denotes the floor function (or equivalently ⌊log₁₀ M_n⌋ + 1). In September 2008, mathematicians at UCLA participating in the Great Internet Mersenne Prime Search (GIMPS) won part of a $100,000 prize from the Electronic Frontier Foundation for their discovery of a very nearly 13-million-digit Mersenne prime. The prize, finally confirmed in October 2009, is for the first known prime with at least 10 million digits. The prime was found on a Dell OptiPlex 745 on August 23, 2008. This was the eighth Mersenne prime discovered at UCLA. On April 12, 2009, a GIMPS server log reported that a 47th Mersenne prime had possibly been found. The find was first noticed on June 4, 2009, and verified a week later. The prime is M42643801. Although it is chronologically the 47th Mersenne prime to be discovered, it is smaller than the largest known at the time, which was the 45th to be discovered. On January 25, 2013, Curtis Cooper, a mathematician at the University of Central Missouri, discovered a 48th Mersenne prime, M57885161 (a number with 17,425,170 digits), as a result of a search executed by a GIMPS server network. On January 19, 2016, Cooper published his discovery of a 49th Mersenne prime, M74207281 (a number with 22,338,618 digits), as a result of a search executed by a GIMPS server network. This was the fourth Mersenne prime discovered by Cooper and his team in the past ten years.
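The digit-count formula ⌊n · log₁₀ 2⌋ + 1 never materializes the number itself. A short check (standard library only) compares it against direct expansion for small exponents, then applies it to the exponent 82589933 behind the 24,862,048-digit record prime:

```python
import math

def mersenne_digits(n: int) -> int:
    """Decimal digits of 2^n - 1: floor(n * log10(2)) + 1."""
    return math.floor(n * math.log10(2)) + 1

# Agreement with direct expansion for small exponents:
for n in [7, 127, 521, 1279, 4253]:
    assert mersenne_digits(n) == len(str(2 ** n - 1))

# Applied to the 24,862,048-digit record prime 2^82589933 - 1,
# without ever computing the number:
assert mersenne_digits(82589933) == 24862048
```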
On September 2, 2016, the Great Internet Mersenne Prime Search finished verifying all tests below M37,156,667, thus officially confirming its position as the 45th Mersenne prime. On January 3, 2018, it was announced that Jonathan Pace, a 51-year-old electrical engineer living in Germantown, Tennessee, had found a 50th Mersenne prime, M77232917 (a number with 23,249,425 digits), as a result of a search executed by a GIMPS server network. The discovery was made by a computer in the offices of a church in the same town. On December 21, 2018, it was announced that the Great Internet Mersenne Prime Search (GIMPS) discovered the largest known prime number, M82589933, having 24,862,048 digits. A computer volunteered by Patrick Laroche from Ocala, Florida made the find on December 7, 2018. In late 2020, GIMPS began using a new technique to rule out potential Mersenne primes called the probable prime (PRP) test, based on development from Robert Gerbicz in 2017, and a simple way to verify tests developed by Krzysztof Pietrzak in 2018. Due to the low error rate and ease of proof, this nearly halved the computing time to rule out potential primes compared with the Lucas–Lehmer test (as two users would no longer have to perform the same test to confirm the other's result), although exponents passing the PRP test still require one further test to confirm their primality.

Theorems about Mersenne numbers
If a and p are natural numbers such that a^p − 1 is prime, then a = 2 or p = 1.
Proof: a ≡ 1 (mod a − 1). Then a^p ≡ 1 (mod a − 1), so a^p − 1 ≡ 0 (mod a − 1). Thus a − 1 divides a^p − 1. However, a^p − 1 is prime, so a − 1 = a^p − 1 or a − 1 = 1. In the former case, a = a^p, hence a = 0 or a = 1 (which is a contradiction, as neither −1 nor 0 is prime) or p = 1. In the latter case, a = 2 or a = 0. If a = 0, however, 0^p − 1 = −1, which is not prime. Therefore, a = 2.
If 2^p − 1 is prime, then p is prime.
Proof: Suppose that p is composite, hence it can be written p = ab with a > 1 and b > 1. Then 2^p − 1 = 2^(ab) − 1 = (2^a − 1)(1 + 2^a + 2^(2a) + ... + 2^((b−1)a)), so 2^p − 1 is composite. By contrapositive, if 2^p − 1 is prime then p is prime.
If p is an odd prime, then every prime q that divides 2^p − 1 must be 1 plus a multiple of 2p. This holds even when 2^p − 1 is prime. For example, 2^5 − 1 = 31 is prime, and 31 = 1 + 3 × (2 × 5). A composite example is 2^11 − 1 = 23 × 89, where 23 = 1 + 2 × 11 and 89 = 1 + 4 × (2 × 11).
Proof: By Fermat's little theorem, q is a factor of 2^(q−1) − 1. Since q is a factor of 2^p − 1, for all positive integers c, q is also a factor of 2^(pc) − 1. Since p is prime and q is not a factor of 2^1 − 1, p is also the smallest positive integer n such that q is a factor of 2^n − 1. As a result, for all positive integers n, q is a factor of 2^n − 1 if and only if p is a factor of n. Therefore, since q is a factor of 2^(q−1) − 1, p is a factor of q − 1, so q ≡ 1 (mod p). Furthermore, since q is a factor of 2^p − 1, which is odd, q is odd. Therefore, q ≡ 1 (mod 2p).
This fact leads to a proof of Euclid's theorem, which asserts the infinitude of primes, distinct from the proof written by Euclid: for every odd prime p, all primes dividing 2^p − 1 are larger than p; thus there are always larger primes than any particular prime. It follows from this fact that for every prime p > 2, there is at least one prime of the form 2kp + 1 less than or equal to 2^p − 1, for some integer k.
If p is an odd prime, then every prime q that divides 2^p − 1 is congruent to ±1 (mod 8).
Proof: 2^(p+1) ≡ 2 (mod q), so 2^((p+1)/2) is a square root of 2 modulo q. By quadratic reciprocity, every prime modulus in which the number 2 has a square root is congruent to ±1 (mod 8).
A Mersenne prime cannot be a Wieferich prime.
Proof: We show that if p = 2^m − 1 is a Mersenne prime, then the congruence 2^(p−1) ≡ 1 (mod p^2) does not hold. By Fermat's little theorem, m divides p − 1. Therefore, one can write p − 1 = mλ. If the given congruence is satisfied, then p^2 divides 2^(mλ) − 1, therefore 0 ≡ (2^(mλ) − 1)/(2^m − 1) = 1 + 2^m + 2^(2m) + ... + 2^((λ−1)m) ≡ λ (mod 2^m − 1). Hence 2^m − 1 divides λ, and therefore λ ≥ 2^m − 1. This leads to p − 1 ≥ m(2^m − 1), which is impossible since m ≥ 2.
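Both divisor restrictions stated above (every prime q dividing 2^p − 1, with p an odd prime, satisfies q ≡ 1 (mod 2p) and q ≡ ±1 (mod 8)) can be spot-checked by brute force for small exponents; a minimal sketch:

```python
def prime_factors(n: int) -> list:
    """Trial-division factorization; fine for these small Mersenne numbers."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# For an odd prime p, every prime q dividing 2^p - 1 satisfies
# q = 1 (mod 2p) and q = +/-1 (mod 8).
for p in [3, 5, 7, 11, 13, 23, 29]:
    for q in set(prime_factors(2 ** p - 1)):
        assert q % (2 * p) == 1
        assert q % 8 in (1, 7)

# e.g. 2^11 - 1 = 2047 = 23 * 89, with 23 = 1 + 2*11 and 89 = 1 + 8*11.
```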
If m and n are natural numbers, then m and n are coprime if and only if 2^m − 1 and 2^n − 1 are coprime. Consequently, a prime number divides at most one prime-exponent Mersenne number. That is, the set of pernicious Mersenne numbers is pairwise coprime.
If p and 2p + 1 are both prime (meaning that p is a Sophie Germain prime), and p is congruent to 3 (mod 4), then 2p + 1 divides 2^p − 1. Example: 11 and 23 are both prime, and 11 ≡ 3 (mod 4), so 23 divides 2^11 − 1.
Proof: Let q be 2p + 1. By Fermat's little theorem, 2^(2p) ≡ 1 (mod q), so either 2^p ≡ 1 (mod q) or 2^p ≡ −1 (mod q). Supposing the latter is true, then 2^(p+1) = (2^((p+1)/2))^2 ≡ −2 (mod q), so −2 would be a quadratic residue mod q. However, since p is congruent to 3 (mod 4), q is congruent to 7 (mod 8) and therefore 2 is a quadratic residue mod q. Also, since q is congruent to 3 (mod 4), −1 is a quadratic nonresidue mod q, so −2 is the product of a residue and a nonresidue and hence it is a nonresidue, which is a contradiction. Hence, the former congruence must be true and 2p + 1 divides 2^p − 1.
All composite divisors of prime-exponent Mersenne numbers are strong pseudoprimes to the base 2.
With the exception of 1, a Mersenne number cannot be a perfect power. That is, in accordance with Mihăilescu's theorem, the equation 2^m − 1 = n^k has no solutions where m, n, and k are integers with m > 1 and k > 1.

List of known Mersenne primes
The 51 known Mersenne primes are 2^p − 1 for the following p: 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609, 57885161, 74207281, 77232917, 82589933.

Factorization of composite Mersenne numbers
Since they are prime numbers, Mersenne primes are divisible only by 1 and themselves. However, not all Mersenne numbers are Mersenne primes. Mersenne numbers are very good test cases for the special number field sieve algorithm, so often the largest number factorized with this algorithm has been a Mersenne number.
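The coprimality statement above is a special case of the well-known identity gcd(2^m − 1, 2^n − 1) = 2^gcd(m, n) − 1, which a few lines of Python can spot-check:

```python
from math import gcd

# Well-known identity underlying the coprimality statement:
#   gcd(2^m - 1, 2^n - 1) = 2^gcd(m, n) - 1
for m in range(1, 40):
    for n in range(1, 40):
        assert gcd(2 ** m - 1, 2 ** n - 1) == 2 ** gcd(m, n) - 1

# In particular, 2^m - 1 and 2^n - 1 are coprime iff m and n are,
# so a prime can divide at most one prime-exponent Mersenne number.
```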
The record-holder was factored with a variant of the special number field sieve that allows the factorization of several numbers at once. See integer factorization records for links to more information. The special number field sieve can factorize numbers with more than one large factor. If a number has only one very large factor, then other algorithms can factorize larger numbers by first finding small factors and then running a primality test on the cofactor. The largest factorization with probable prime factors allowed is of a Mersenne number whose cofactor is a 3,143,811-digit probable prime. It was discovered by a GIMPS participant with the nickname "fre_games". The Mersenne number M1277 is the smallest composite Mersenne number with no known factors; it has no prime factors below 2^68. The table below shows factorizations for the first 20 composite Mersenne numbers. The number of factors for the first 500 Mersenne numbers can be found at .
in Utah born from now-defunct Magcorp.

Pidgeon process
China is almost completely reliant on the silicothermic Pidgeon process (the reduction of the oxide at high temperatures with silicon, often provided by a ferrosilicon alloy in which the iron is but a spectator in the reactions) to obtain the metal:
2 CaO + 2 MgO + Si → 2 Mg + Ca₂SiO₄
The process can also be carried out with carbon at approximately 2300 °C:
MgO + C → Mg + CO

Dow process
In the United States, magnesium is obtained principally with the Dow process, by electrolysis of fused magnesium chloride from brine and sea water. A saline solution containing Mg²⁺ ions is first treated with lime (calcium oxide) and the precipitated magnesium hydroxide is collected:
Mg²⁺ + CaO + H₂O → Mg(OH)₂ + Ca²⁺
The hydroxide is then converted to a partial hydrate of magnesium chloride by treating it with hydrochloric acid and heating the product:
Mg(OH)₂ + 2 HCl → MgCl₂ + 2 H₂O
The salt is then electrolyzed in the molten state. At the cathode, the Mg²⁺ ion is reduced by two electrons to magnesium metal:
Mg²⁺ + 2 e⁻ → Mg
At the anode, each pair of Cl⁻ ions is oxidized to chlorine gas, releasing two electrons to complete the circuit:
2 Cl⁻ → Cl₂(g) + 2 e⁻

YSZ process
A new process, solid oxide membrane technology, involves the electrolytic reduction of MgO. At the cathode, the Mg²⁺ ion is reduced by two electrons to magnesium metal. The electrolyte is yttria-stabilized zirconia (YSZ). The anode is a liquid metal. At the YSZ/liquid metal anode, O²⁻ is oxidized. A layer of graphite borders the liquid metal anode, and at this interface carbon and oxygen react to form carbon monoxide. When silver is used as the liquid metal anode, there is no reductant carbon or hydrogen needed, and only oxygen gas is evolved at the anode. It has been reported that this method provides a 40% reduction in cost per pound over the electrolytic reduction method.

History
The name magnesium originates from the Greek word for locations related to the tribe of the Magnetes, either a district in Thessaly called Magnesia or Magnesia ad Sipylum, now in Turkey.
It is related to magnetite and manganese, which also originated from this area, and required differentiation as separate substances. See manganese for this history. In 1618, a farmer at Epsom in England attempted to give his cows water from a well there. The cows refused to drink because of the water's bitter taste, but the farmer noticed that the water seemed to heal scratches and rashes. The substance became known as Epsom salts and its fame spread. It was eventually recognized as hydrated magnesium sulfate, MgSO4·7H2O. The metal itself was first isolated by Sir Humphry Davy in England in 1808. He used electrolysis on a mixture of magnesia and mercuric oxide. Antoine Bussy prepared it in coherent form in 1831. Davy's first suggestion for a name was magnium, but the name magnesium is now used. Uses as a metal Magnesium is the third-most-commonly-used structural metal, following iron and aluminium. The main applications of magnesium are, in order: aluminium alloys, die-casting (alloyed with zinc), removing sulfur in the production of iron and steel, and the production of titanium in the Kroll process. Magnesium is used in lightweight materials and alloys. For example, when infused with silicon carbide nanoparticles, it has extremely high specific strength. Historically, magnesium was one of the main aerospace construction metals and was used for German military aircraft as early as World War I and extensively for German aircraft in World War II. The Germans coined the name "Elektron" for magnesium alloy, a term which is still used today. In the commercial aerospace industry, magnesium was generally restricted to engine-related components, due to fire and corrosion hazards. Magnesium alloy use in aerospace is increasing in the 21st century, driven by the importance of fuel economy. Development and testing of new magnesium alloys continues, notably Elektron 21, which (in test) has proved suitable for aerospace engine, internal, and airframe components.
The European Community runs three R&D magnesium projects in the Aerospace priority of the FP6 Program. Recent developments in metallurgy and manufacturing have allowed for the potential for magnesium alloys to act as replacements for aluminium and steel alloys in certain applications. In the form of thin ribbons, magnesium is used to purify solvents; for example, preparing super-dry ethanol. Aircraft Wright Aeronautical used a magnesium crankcase in the WWII-era Wright R-3350 Duplex Cyclone aviation engine. This presented a serious problem for the earliest models of the Boeing B-29 Superfortress heavy bomber when an in-flight engine fire ignited the engine crankcase. The resulting combustion was as hot as 5,600 °F (3,100 °C) and could sever the wing spar from the fuselage. Automotive Mercedes-Benz used the alloy Elektron in the bodywork of an early model Mercedes-Benz 300 SLR; these cars competed in the 1955 World Sportscar Championship including a win at the Mille Miglia, and at Le Mans where one was involved in the 1955 Le Mans disaster when spectators were showered with burning fragments of elektron. Porsche used magnesium alloy frames in the 917/053 that won Le Mans in 1971, and continues to use magnesium alloys for its engine blocks due to the weight advantage. Volkswagen Group has used magnesium in its engine components for many years. Mitsubishi Motors uses magnesium for its paddle shifters. BMW used magnesium alloy blocks in their N52 engine, including an aluminium alloy insert for the cylinder walls and cooling jackets surrounded by a high-temperature magnesium alloy AJ62A. The engine was used worldwide between 2005 and 2011 in various 1, 3, 5, 6, and 7 series models; as well as the Z4, X1, X3, and X5. Chevrolet used the magnesium alloy AE44 in the 2006 Corvette Z06. Both AJ62A and AE44 are recent developments in high-temperature low-creep magnesium alloys. 
The general strategy for such alloys is to form intermetallic precipitates at the grain boundaries, for example by adding mischmetal or calcium. New alloy development and lower costs that make magnesium competitive with aluminium will increase the number of automotive applications. Electronics Because of its low density and good mechanical and electrical properties, magnesium is used in the manufacture of mobile phones, laptop and tablet computers, cameras, and other electronic components. Its light weight made it a premium feature in some 2020 laptops. Other Magnesium, being readily available and relatively nontoxic, has a variety of uses: Magnesium is flammable, burning at a temperature of approximately , and the autoignition temperature of magnesium ribbon is approximately . It produces intense, bright, white light when it burns. Magnesium's high combustion temperature makes it a useful tool for starting emergency fires. Other uses include flash photography, flares, pyrotechnics, fireworks sparklers, and trick birthday candles. Magnesium is also often used to ignite thermite or other materials that require a high ignition temperature. In the form of turnings or ribbons, to prepare Grignard reagents, which are useful in organic synthesis. As an additive agent in conventional propellants and the production of nodular graphite in cast iron. As a reducing agent to separate uranium and other metals from their salts. As a sacrificial (galvanic) anode to protect boats, underground tanks, pipelines, buried structures, and water heaters. Alloyed with zinc to produce the zinc sheet used in photoengraving plates in the printing industry, dry-cell battery walls, and roofing. As a metal, this element's principal use is as an alloying additive to aluminium, with these aluminium-magnesium alloys being used mainly for beverage cans, sports equipment such as golf clubs, fishing reels, and archery bows and arrows.
Specialty, high-grade car wheels of magnesium alloy are called "mag wheels", although the term is often misapplied to aluminium wheels. Many car and aircraft manufacturers have made engine and body parts from magnesium. Magnesium batteries have been commercialized as primary batteries, and are an active topic of research for rechargeable batteries. Safety precautions Magnesium metal and its alloys can be explosive hazards; they are highly flammable in their pure form when molten or in powder or ribbon form. Burning or molten magnesium reacts violently with water. When working with powdered magnesium, safety glasses with eye protection and UV filters (such as welders use) are employed because burning magnesium produces ultraviolet light that can permanently damage the retina of a human eye. Magnesium is capable of reducing water and releasing highly flammable hydrogen gas: Mg (s) + 2 H2O (l) → Mg(OH)2 (s) + H2 (g) Therefore, water cannot extinguish magnesium fires. The hydrogen gas produced intensifies the fire. Dry sand is an effective smothering agent, but only on relatively level and flat surfaces. Magnesium reacts with carbon dioxide exothermically to form magnesium oxide and carbon: 2 Mg + CO2 → 2 MgO + C (s) Hence, carbon dioxide fuels rather than extinguishes magnesium fires. Burning magnesium can be quenched by using a Class D dry chemical fire extinguisher, or by covering the fire with sand or magnesium foundry flux to remove its air source. Useful compounds Magnesium compounds, primarily magnesium oxide (MgO), are used as a refractory material in furnace linings for producing iron, steel, nonferrous metals, glass, and cement. Magnesium oxide and other magnesium compounds are also used in the agricultural, chemical, and construction industries. Magnesium oxide from calcination is used as an electrical insulator in fire-resistant cables. Magnesium hydride is under investigation as a way to store hydrogen.
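A back-of-the-envelope calculation (not from the source) shows why magnesium hydride attracts hydrogen-storage research: the usual figure of merit is the mass fraction of hydrogen the compound carries, computed here from standard molar masses.

```python
# Gravimetric hydrogen capacity of magnesium hydride (MgH2), a common
# figure of merit for hydrogen-storage materials. Standard molar masses:
M_MG = 24.305  # g/mol, magnesium
M_H = 1.008    # g/mol, hydrogen

def mgh2_hydrogen_weight_percent():
    """Mass fraction of hydrogen in MgH2, as a percentage."""
    m_mgh2 = M_MG + 2 * M_H
    return 100 * (2 * M_H) / m_mgh2

print(round(mgh2_hydrogen_weight_percent(), 2))  # 7.66
```

About 7.7% of MgH2's mass is hydrogen, a comparatively high theoretical capacity for a solid hydride, which is the property under investigation.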
Magnesium reacts with an alkyl halide to give a Grignard reagent, which is a very useful tool for preparing alcohols. Magnesium salts are included in various foods, fertilizers (magnesium is a component of chlorophyll), and microbe culture media. Magnesium sulfite is used in the manufacture of paper (sulfite process). Magnesium phosphate is used to fireproof wood used in construction. Magnesium hexafluorosilicate is used for moth-proofing textiles. Biological roles Mechanism of action The important interaction between phosphate and magnesium ions makes magnesium essential to the basic nucleic acid chemistry of all cells of all known living organisms. More than 300 enzymes require magnesium ions for their catalytic action, including all enzymes using or synthesizing ATP and those that use other nucleotides to synthesize DNA and RNA. The ATP molecule is normally found in a chelate with a magnesium ion. Nutrition Diet Spices, nuts, cereals, cocoa and vegetables are rich sources of magnesium. Green leafy vegetables such as spinach are also rich in magnesium. Beverages rich in magnesium are coffee, tea, and cocoa. Dietary recommendations In the UK, the recommended daily values for magnesium are 300 mg for men and 270 mg for women. In the U.S. the Recommended Dietary Allowances (RDAs) are 400 mg for men ages 19–30 and 420 mg for those older; for women, 310 mg for ages 19–30 and 320 mg for those older. Supplementation Numerous pharmaceutical preparations of magnesium and dietary supplements are available. In two human trials magnesium oxide, one of the most common forms in magnesium dietary supplements because of its high magnesium content per weight, was less bioavailable than magnesium citrate, chloride, lactate or aspartate. Metabolism An adult body has 22–26 grams of magnesium, with 60% in the skeleton, 39% intracellular (20% in skeletal muscle), and 1% extracellular. Serum levels are typically 0.7–1.0 mmol/L or 1.8–2.4 mEq/L.
Serum magnesium levels may be normal even when intracellular magnesium is deficient. The mechanisms for maintaining the magnesium level in the serum are varying gastrointestinal absorption and renal excretion. Intracellular magnesium is correlated with intracellular potassium. Increased magnesium lowers calcium and can either prevent hypercalcemia or cause hypocalcemia depending on the initial level. Both low and high protein intake conditions inhibit magnesium absorption, as does the amount of phosphate, phytate, and fat in the gut. Unabsorbed dietary magnesium is excreted in feces; absorbed magnesium is excreted in urine and sweat. Detection in serum and | by electrolysis of magnesium salts obtained from brine, and is used primarily as a component in aluminium-magnesium alloys, sometimes called magnalium or magnelium. Magnesium is less dense than aluminium, and the alloy is prized for its combination of lightness and strength. This element is the eleventh most abundant element by mass in the human body and is essential to all cells and some 300 enzymes. Magnesium ions interact with polyphosphate compounds such as ATP, DNA, and RNA. Hundreds of enzymes require magnesium ions to function. Magnesium compounds are used medicinally as common laxatives, antacids (e.g., milk of magnesia), and to stabilize abnormal nerve excitation or blood vessel spasm in such conditions as eclampsia. Characteristics Physical properties Elemental magnesium is a gray-white lightweight metal, two-thirds the density of aluminium. Magnesium has the lowest melting point and the lowest boiling point of all the alkaline earth metals. Pure polycrystalline magnesium is brittle and easily fractures along shear bands. It becomes much more ductile when alloyed with a small amount of other metals, such as 1% aluminium. Ductility of polycrystalline magnesium can also be significantly improved by reducing its grain size to ca. 1 micron or less.
Chemical properties General chemistry It tarnishes slightly when exposed to air, although, unlike the heavier alkaline earth metals, an oxygen-free environment is unnecessary for storage because magnesium is protected by a thin layer of oxide that is fairly impermeable and difficult to remove. Direct reaction of magnesium with air or oxygen at ambient pressure forms only the "normal" oxide MgO. However, this oxide may be combined with hydrogen peroxide to form magnesium peroxide, MgO2, and at low temperature the peroxide may be further reacted with ozone to form magnesium superoxide Mg(O2)2. Magnesium reacts with water at room temperature, though it reacts much more slowly than calcium, a similar group 2 metal. When submerged in water, hydrogen bubbles form slowly on the surface of the metal – though, if powdered, it reacts much more rapidly. The reaction occurs faster with higher temperatures (see safety precautions). Magnesium's reversible reaction with water can be harnessed to store energy and run a magnesium-based engine. Magnesium also reacts exothermically with most acids such as hydrochloric acid (HCl), producing the metal chloride and hydrogen gas, similar to the HCl reaction with aluminium, zinc, and many other metals.
This property was used in incendiary weapons during the firebombing of cities in World War II, where the only practical civil defense was to smother a burning flare under dry sand to exclude atmosphere from the combustion. Magnesium may also be used as an igniter for thermite, a mixture of aluminium and iron oxide powder that ignites only at a very high temperature. Organic chemistry Organomagnesium compounds are widespread in organic chemistry. They are commonly found as Grignard reagents. Magnesium can react with haloalkanes to give Grignard reagents. Examples of Grignard reagents are phenylmagnesium bromide and ethylmagnesium bromide. The Grignard reagents function as a common nucleophile, attacking the electrophilic group such as the carbon atom that is present within the polar bond of a carbonyl group. A prominent organomagnesium reagent beyond Grignard reagents is magnesium anthracene with magnesium forming a 1,4-bridge over the central ring. It is used as a source of highly active magnesium. The related butadiene-magnesium adduct serves as a source for the butadiene dianion. Magnesium in organic chemistry also appears as low valent magnesium compounds, primarily with the magnesium forming diatomic ions in the +1 oxidation state but more recently also with zero oxidation state or a mixture of +1 and zero states. Such compounds find synthetic application as reducing agents and sources of nucleophilic metal atoms. Source of light When burning in air, magnesium produces a brilliant-white light that includes strong ultraviolet wavelengths. Magnesium powder (flash powder) was used for subject illumination in the early days of photography. Later, magnesium filament was used in electrically ignited single-use photography flashbulbs. Magnesium powder is used in fireworks and marine flares where a brilliant white light is required. It was also used for various theatrical effects, such as lightning, pistol flashes, and supernatural appearances. 
Occurrence Magnesium is the eighth-most-abundant element in the Earth's crust by mass and tied in seventh place with iron in molarity. It is found in large deposits of magnesite, dolomite, and other minerals, and in mineral waters, where magnesium ion is soluble. Although magnesium is found in more than 60 minerals, only dolomite, magnesite, brucite, carnallite, talc, and olivine are of commercial importance. The Mg2+ cation is the second-most-abundant cation in seawater (about ⅛ the mass of sodium ions in a given sample), which makes seawater and sea salt attractive commercial sources for Mg. To extract the magnesium, calcium hydroxide is added to seawater to form magnesium hydroxide precipitate. Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+ Magnesium hydroxide (brucite) is insoluble in water and can be filtered out and reacted with hydrochloric acid to produce concentrated magnesium chloride. Mg(OH)2 + 2 HCl → MgCl2 + 2 H2O From magnesium chloride, electrolysis produces magnesium. Forms Alloys As of 2013, magnesium alloys consumption was less than one million tonnes per year, compared with 50 million tonnes of aluminum alloys. Their use has been historically limited by the tendency of Mg alloys to corrode, creep at high temperatures, and combust. Corrosion The presence of iron, nickel, copper, and cobalt strongly activates corrosion. In more than trace amounts, these metals precipitate as intermetallic compounds, and the precipitate locales function as active cathodic sites that reduce water, causing the loss of magnesium. Controlling the quantity of these metals improves corrosion resistance. Sufficient manganese overcomes the corrosive effects of iron. This requires precise control over composition, increasing costs. Adding a cathodic poison captures atomic hydrogen within the structure of a metal. This prevents the formation of free hydrogen gas, an essential factor of corrosive chemical processes.
The addition of about one in three hundred parts arsenic reduces its corrosion rate in a salt solution by a factor of nearly ten. High-temperature creep and flammability Research showed that magnesium's tendency to creep at high temperatures is eliminated by the addition of scandium and gadolinium. Flammability is greatly reduced by a small amount of calcium in the alloy. By using rare-earth elements, it may be possible to manufacture magnesium alloys with an ignition temperature higher than magnesium's liquidus and in some cases potentially pushing it close to magnesium's boiling point. Compounds Magnesium forms a variety of compounds important to industry and biology, including magnesium carbonate, magnesium chloride, magnesium citrate, magnesium hydroxide (milk |
primitive document management system intended for law firms in 1969, and helped invent IBM GML later that same year. GML was first publicly disclosed in 1973. In 1975, Goldfarb moved from Cambridge, Massachusetts to Silicon Valley and became a product planner at the IBM Almaden Research Center. There, he convinced IBM's executives to deploy GML commercially in 1978 as part of IBM's Document Composition Facility product, and it was widely used in business within a few years. SGML, which was based on both GML and GenCode, was an ISO project worked on by Goldfarb beginning in 1974. Goldfarb eventually became chair of the SGML committee. SGML was first released by ISO as the ISO 8879 standard in October 1986. troff and nroff Some early examples of computer markup languages available outside the publishing industry can be found in typesetting tools on Unix systems such as troff and nroff. In these systems, formatting commands were inserted into the document text so that typesetting software could format the text according to the editor's specifications. It was a trial and error iterative process to get a document printed correctly. Availability of WYSIWYG ("what you see is what you get") publishing software supplanted much use of these languages among casual users, though serious publishing work still uses markup to specify the non-visual structure of texts, and WYSIWYG editors now usually save documents in a markup-language-based format. TeX Another major publishing standard is TeX, created and refined by Donald Knuth in the 1970s and '80s. TeX concentrated on detailed layout of text and font descriptions to typeset mathematical books. This required Knuth to spend considerable time investigating the art of typesetting. TeX is mainly used in academia, where it is a de facto standard in many scientific disciplines. 
A TeX macro package known as LaTeX provides a descriptive markup system on top of TeX, and is widely used both among the scientific community and the publishing industry. Scribe, GML and SGML The first language to make a clean distinction between structure and presentation was Scribe, developed by Brian Reid and described in his doctoral thesis in 1980. Scribe was revolutionary in a number of ways, not least that it introduced the idea of styles separated from the marked up document, and of a grammar controlling the usage of descriptive elements. Scribe influenced the development of Generalized Markup Language (later SGML), and is a direct ancestor to HTML and LaTeX. In the early 1980s, the idea that markup should focus on the structural aspects of a document and leave the visual presentation of that structure to the interpreter led to the creation of SGML. The language was developed by a committee chaired by Goldfarb. It incorporated ideas from many different sources, including Tunnicliffe's project, GenCode. Sharon Adler, Anders Berglund, and James A. Marke were also key members of the SGML committee. SGML specified a syntax for including the markup in documents, as well as one for separately describing what tags were allowed, and where (the Document Type Definition (DTD), later known as a schema). This allowed authors to create and use any markup they wished, selecting tags that made the most sense to them and were named in their own natural languages, while also allowing automated verification. Thus, SGML is properly a meta-language, and many particular markup languages are derived from it. From the late '80s onward, most substantial new markup languages have been based on the SGML system, including for example TEI and DocBook. SGML was promulgated as an International Standard by International Organization for Standardization, ISO 8879, in 1986. SGML found wide acceptance and use in fields with very large-scale documentation requirements. 
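The DTD idea described above can be sketched with a tiny invented vocabulary (the element names here are hypothetical, shown in the XML DTD syntax that descends from SGML's):

```xml
<!-- Hypothetical vocabulary: an article is a title followed by one or more paragraphs. -->
<!ELEMENT article (title, para+)>
<!ELEMENT title   (#PCDATA)>
<!ELEMENT para    (#PCDATA)>
<!-- Authors may then write <article><title>...</title><para>...</para></article>,
     and a validating parser rejects any other arrangement of the tags. -->
```

The declarations say nothing about presentation; they only constrain which tags exist and how they may nest, which is exactly the separation of concerns that made SGML a meta-language.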
However, many found it cumbersome and difficult to learn — a side effect of its design attempting to do too much and to be too flexible. For example, SGML made end tags (or start-tags, or even both) optional in certain contexts, because its developers thought markup would be done manually by overworked support staff who would appreciate saving keystrokes. HTML In 1989, computer scientist Sir Tim Berners-Lee wrote a memo proposing an Internet-based hypertext system, then specified HTML and wrote the browser and server software in the last part of 1990. The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Berners-Lee in late 1991. It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house SGML-based documentation format at CERN, and very similar to the sample schema in the SGML standard. Eleven of these elements still exist in HTML 4. Berners-Lee considered HTML an SGML application. The Internet Engineering Task Force (IETF) formally defined it as such with the mid-1993 publication of the first proposal for an HTML specification: "Hypertext Markup Language (HTML)" Internet-Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar. Many of the HTML text elements are found in the 1988 ISO technical report TR 9537 Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system. These formatting commands were derived from those used by typesetters to manually format documents. Steven DeRose argues that HTML's use of descriptive markup (and influence of SGML in particular) was a major factor in the success of the Web, because of the flexibility and extensibility that it enabled. 
HTML became the main markup language for creating web pages and other information that can be displayed in a web browser, and is quite likely the most used markup language in the world today. XML XML (Extensible Markup Language) is a meta markup language that is very widely used. XML was developed by the World Wide Web Consortium, in a committee created and chaired by Jon Bosak. The main purpose of XML was to simplify SGML by focusing on a particular problem — documents on the Internet. XML remains a meta-language like SGML, allowing users to create any tags needed (hence "extensible") and then describing those tags and their permitted uses. XML adoption was helped because every XML document can be written in such a way that it is also an SGML document, and existing SGML users and software could switch to XML fairly easily. However, XML eliminated many of the more complex features of SGML to simplify implementation environments such as documents and publications. It appeared to strike a happy medium between simplicity and flexibility, as well as supporting very robust schema definition and validation tools, and was rapidly adopted for many other uses. XML is now widely used for communicating data between applications, for serializing program data, for hardware communications protocols, vector graphics, and many other uses as well as documents. XHTML From January 2000 until HTML 5 was released, all W3C Recommendations for HTML have been based on XML, using the abbreviation XHTML (Extensible HyperText Markup Language). The language specification requires that XHTML Web documents be well-formed XML documents. This allows for more rigorous and robust documents, by avoiding many syntax errors which historically led to incompatible browser behaviors, while still using document components that are familiar from HTML. 
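The well-formedness requirement is mechanical enough to check with any XML parser. As a small sketch using Python's standard library (the sample strings are invented for illustration):

```python
import xml.etree.ElementTree as ET

def is_well_formed(doc):
    """Return True if `doc` parses as well-formed XML."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

# Legal in classic HTML, but not well-formed XML: <br> is never closed.
print(is_well_formed("<p>one<br>two</p>"))    # False
# The XHTML spelling closes the empty element.
print(is_well_formed("<p>one<br />two</p>"))  # True
```

This is the practical meaning of the XHTML rules: any conforming XML parser can reject the malformed document outright instead of guessing at the author's intent.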
One of the most noticeable differences between HTML and XHTML is the rule that all tags must be closed: empty HTML tags such as <br> must either be closed with a regular end-tag, or replaced by a special form: <br /> (the space before the '/' on the end tag is optional, but frequently used because it enables some pre-XML Web browsers, and SGML parsers, to accept the tag). Another difference is that all attribute values in tags must be quoted. Both these differences are commonly criticised as verbose, but also praised because they make it far easier to detect, localize, and repair errors. Finally, all tag and attribute names within the XHTML namespace must be lowercase to be valid. HTML, on the other hand, was case-insensitive. Other XML-based applications Many XML-based applications now exist, including the Resource Description Framework as RDF/XML, XForms, DocBook, SOAP, and the Web Ontology Language (OWL). For a partial list of these, see List of XML markup languages. Features of markup languages A common feature of many markup languages is that they intermix the text of a document with markup instructions in the same data stream or file. This is not necessary; it is possible to isolate markup from text content, using pointers, offsets, IDs, or other methods to co-ordinate the two. Such "standoff markup" is typical for the internal representations that programs use to work with marked-up documents. However, embedded or "inline" markup is much more common elsewhere. Here, for example, is a small section of text marked up in HTML: <h1>Anatidae</h1> <p> The family <i>Anatidae</i> includes ducks, geese, and swans, but <em>not</em> the closely related screamers. </p> The codes enclosed in angle-brackets <like this> are markup instructions (known as tags), while the text between these instructions is the actual text of the document. The codes h1, p, and em are examples of semantic markup, in that they describe the intended purpose or the meaning of the text they include. Specifically, h1 means "this is a first-level heading", p means "this is a paragraph", and em means "this is an emphasized word or phrase". A program interpreting such structural markup may apply its own rules or styles for presenting the various pieces of text, using different typefaces, boldness, font size, indentation, colour, or other styles, as desired. For example, a tag such as "h1" (header level 1) might be presented in a large bold sans-serif typeface in an article, or it might be underscored in a monospaced (typewriter-style) document – or it might simply not change the presentation at all.
In contrast, the i tag in HTML 4 is an example of presentational markup, which is generally used to specify a particular characteristic of the text without specifying the reason for that appearance. In this case, the i element dictates the use of an italic typeface. However, in HTML 5, this element has been repurposed with a more semantic usage: to denote a span of text in an alternate voice or mood, or otherwise offset from the normal prose.
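The "standoff markup" approach mentioned earlier, in which markup is kept apart from the text and coordinated by character offsets, can be sketched in a few lines of Python. The offsets-and-tags data layout and the function name here are illustrative assumptions, not a standard format:

```python
# Standoff markup: the plain text lives in one place, and the markup
# lives in a separate table of (start, end, tag) character offsets.
text = "The family Anatidae includes ducks, geese, and swans."
annotations = [(11, 19, "i")]  # italicize "Anatidae" (offsets 11..19)

def to_inline(text, annotations):
    """Render standoff annotations as inline tags (assumes no overlaps)."""
    out, pos = [], 0
    for start, end, tag in sorted(annotations):
        out.append(text[pos:start])                       # untagged run
        out.append(f"<{tag}>{text[start:end]}</{tag}>")   # tagged span
        pos = end
    out.append(text[pos:])                                # trailing text
    return "".join(out)

print(to_inline(text, annotations))
# The family <i>Anatidae</i> includes ducks, geese, and swans.
```

This illustrates why standoff markup suits internal program representations: the text can be searched or edited as a plain string, while annotations remain separately addressable.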
semiotics and sociology Meaning (semiotics), the distribution of signs in sign relations Meaning (existential), the meaning of life in contemporary existentialism Arts and entertainment Meanings (album), a 2004 album by Gad Elbaz "Meaning" (House), a 2006 episode of the TV series House Meaning (music), the philosophical question of meaning in relation to music "The Meaning", a song on Discipline (Janet Jackson album) (2008) The Meaning (album), a 2011 album by Layzie Bone See also Hermeneutics, the theory of text interpretation Linguistics, the scientific study of language Logotherapy, psychotherapy based on
denies that there are moral truths, error theory entails moral nihilism and, thus, moral skepticism; however, neither moral nihilism nor moral skepticism conversely entails error theory. Non-cognitivist theories Non-cognitivist theories hold that ethical sentences are neither true nor false because they do not express genuine propositions. Non-cognitivism is another form of moral anti-realism. Most forms of non-cognitivism are also forms of expressivism; however, some philosophers, such as Mark Timmons and Terrence Horgan, distinguish the two and allow the possibility of cognitivist forms of expressivism. Non-cognitivism includes: Emotivism, defended by A. J. Ayer and Charles Stevenson, holds that ethical sentences serve merely to express emotions. Ayer argues that ethical sentences are expressions of approval or disapproval, not assertions. So "Killing is wrong" means something like "Boo on killing!". Quasi-realism, defended by Simon Blackburn, holds that ethical statements behave linguistically like factual claims and can be appropriately called "true" or "false", even though there are no ethical facts for them to correspond to. Projectivism and moral fictionalism are related theories. Universal prescriptivism, defended by R. M. Hare, holds that moral statements function like universalized imperative sentences. So "Killing is wrong" means something like "Don't kill!" Hare's version of prescriptivism requires that moral prescriptions be universalizable, and hence actually have objective values, in spite of failing to be indicative statements with truth-values per se. Centralism and non-centralism Yet another way of categorizing meta-ethical theories is to distinguish between centralist and non-centralist moral theories.
The debate between centralism and non-centralism revolves around the relationship between the so-called "thin" and "thick" concepts of morality: thin moral concepts are those such as good, bad, right, and wrong; thick moral concepts are those such as courageous, inequitable, just, or dishonest. While both sides agree that the thin concepts are more general and the thick more specific, centralists hold that the thin concepts are antecedent to the thick ones and that the latter are therefore dependent on the former. That is, centralists argue that one must understand words like "right" and "ought" before understanding words like "just" and "unkind." Non-centralism rejects this view, holding that thin and thick concepts are on par with one another and even that the thick concepts are a sufficient starting point for understanding the thin ones. Non-centralism has been of particular importance to ethical naturalists in the late 20th and early 21st centuries as part of their argument that normativity is a non-excisable aspect of language and that there is no way of analyzing thick moral concepts into a purely descriptive element attached to a thin moral evaluation, thus undermining any fundamental division between facts and norms. Allan Gibbard, R. M. Hare, and Simon Blackburn, meanwhile, have argued in favor of the fact/norm distinction, with Gibbard going so far as to argue that, even if conventional English has only mixed normative terms (that is, terms that are neither purely descriptive nor purely normative), we could develop a nominally English metalanguage that still allowed us to maintain the division between factual descriptions and normative evaluations. Moral ontology Moral ontology attempts to answer the question, "What is the nature of moral judgments?"
Amongst those who believe there to be some standard(s) of morality (as opposed to moral nihilists), there are two divisions: universalists, who hold that the same moral facts or principles apply to everyone everywhere; and relativists, who hold that different moral facts or principles apply to different people or societies. Moral universalism Moral universalism (or universal morality) is the meta-ethical position that some system of ethics, or a universal ethic, applies universally, that is, to all intelligent beings regardless of culture, race, sex, religion, nationality, sexuality, or other distinguishing feature. The source or justification of this system may be thought to be, for instance, human nature, shared vulnerability to suffering, the demands of universal reason, what is common among existing moral codes, or the common mandates of religion (although it can be argued that the latter is not in fact moral universalism because it may distinguish between Gods and mortals). Moral universalism is the opposing position to various forms of moral relativism. Universalist theories are generally forms of moral realism, though exceptions exist, such as the subjectivist ideal observer and divine command theories, and the non-cognitivist universal prescriptivism of R. M. Hare. Forms of moral universalism include: Value monism is the common form of universalism, which holds that all goods are commensurable on a single value scale. Value pluralism contends that there are two or more genuine scales of value, knowable as such, yet incommensurable, so that any prioritization of these values is either non-cognitive or subjective. A value pluralist might, for example, contend that both a life as a nun and a life as a mother realize genuine values (in a universalist sense), yet they are incompatible (nuns may not have children), and there is no purely rational way to measure which is preferable. A notable proponent of this view is Isaiah Berlin.
Moral relativism Moral relativism maintains that all moral judgments have their origins either in societal or in individual standards, and that no single standard exists by which one can objectively assess the truth of a moral proposition. Meta-ethical relativists, in general, believe that the descriptive properties of terms such as "good", "bad", "right", and "wrong" do not stand subject to universal truth conditions, but only to societal convention and personal preference.
Given the same set of verifiable facts, some societies or individuals will have a fundamental disagreement about what one ought to do based on societal or individual norms, and one cannot adjudicate these using some independent standard of evaluation. The latter standard will always be societal or personal and not universal, unlike, for example, the scientific standards for assessing temperature or for determining mathematical truths. Moral nihilism Moral nihilism, also known as ethical nihilism, is the meta-ethical view that nothing has intrinsic moral value. For example, a moral nihilist would say that killing someone, for whatever reason, is intrinsically neither morally right nor morally wrong. Moral nihilism must be distinguished from moral relativism, which does allow for moral statements to be intrinsically true or false in a non-universal sense, but does not assign any static truth-values to moral statements. Insofar as only true statements can be known, moral nihilists are moral skeptics. Most forms of moral nihilism are non-cognitivist and vice versa, though |
department, France Montesquieu, Tarn-et-Garonne, commune in the Tarn-et-Garonne department, France
of irrigation that allowed them to build a farming community in the desert. From 1849 to 1852, the Mormons greatly expanded their missionary efforts, establishing several missions in Europe, Latin America, and the South Pacific. Converts were expected to "gather" to Zion, and during Young's presidency (1847–77) over seventy thousand Mormon converts immigrated to America. Many of the converts came from England and Scandinavia, and were quickly assimilated into the Mormon community. Many of these immigrants crossed the Great Plains in wagons drawn by oxen, while some later groups pulled their possessions in small handcarts. During the 1860s, newcomers began using the new railroad that was under construction. In 1852, church leaders publicized the previously secret practice of plural marriage, a form of polygamy. Over the next 50 years, many Mormons (between 20 and 30 percent of Mormon families) entered into plural marriages as a religious duty, with the number of plural marriages reaching a peak around 1860, and then declining through the rest of the century. Besides the doctrinal reasons for plural marriage, the practice made some economic sense, as many of the plural wives were single women who arrived in Utah without brothers or fathers to offer them societal support. By 1857, tensions had again escalated between Mormons and other Americans, largely as a result of accusations involving polygamy and the theocratic rule of the Utah Territory by Brigham Young. In 1857, U.S. President James Buchanan sent an army to Utah, which Mormons interpreted as open aggression against them. Fearing a repeat of Missouri and Illinois, the Mormons prepared to defend themselves, determined to torch their own homes in the case that they were invaded. 
The relatively peaceful Utah War ensued from 1857 to 1858, in which the most notable instance of violence was the Mountain Meadows massacre, when leaders of a local Mormon militia ordered the killing of a civilian emigrant party that was traveling through Utah during the escalating tensions. In 1858, Young agreed to step down from his position as governor and was replaced by a non-Mormon, Alfred Cumming. Nevertheless, the LDS Church still wielded significant political power in the Utah Territory. At Young's death in 1877, he was followed by other LDS Church presidents, who resisted efforts by the United States Congress to outlaw Mormon polygamous marriages. In 1878, the U.S. Supreme Court ruled in Reynolds v. United States that religious duty was not a suitable defense for practicing polygamy, and many Mormon polygamists went into hiding; later, Congress began seizing church assets. In September 1890, church president Wilford Woodruff issued a Manifesto that officially suspended the practice of polygamy. Although this Manifesto did not dissolve existing plural marriages, relations with the United States markedly improved after 1890, such that Utah was admitted as a U.S. state in 1896. After the Manifesto, some Mormons continued to enter into polygamous marriages, but these eventually stopped in 1904 when church president Joseph F. Smith disavowed polygamy before Congress and issued a "Second Manifesto" calling for all plural marriages in the church to cease. Eventually, the church adopted a policy of excommunicating members found practicing polygamy, and today seeks actively to distance itself from "fundamentalist" groups that continue the practice. Modern times During the early 20th century, Mormons began to reintegrate into the American mainstream. In 1929, the Mormon Tabernacle Choir began broadcasting a weekly performance on national radio, becoming an asset for public relations. 
Mormons emphasized patriotism and industry, rising in socioeconomic status from the bottom among American religious denominations to middle-class. In the 1920s and 1930s, Mormons began migrating out of Utah, a trend hurried by the Great Depression, as Mormons looked for work wherever they could find it. As Mormons spread out, church leaders created programs that would help preserve the tight-knit community feel of Mormon culture. In addition to weekly worship services, Mormons began participating in numerous programs such as Boy Scouting, a Young Women organization, church-sponsored dances, ward basketball, camping trips, plays, and religious education programs for youth and college students. During the Great Depression, the church started a welfare program to meet the needs of poor members, which has since grown to include a humanitarian branch that provides relief to disaster victims. During the latter half of the 20th century, there was a retrenchment movement in Mormonism in which Mormons became more conservative, attempting to regain their status as a "peculiar people". Though the 1960s and 1970s brought changes such as Women's Liberation and the civil rights movement, Mormon leaders were alarmed by the erosion of traditional values, the sexual revolution, the widespread use of recreational drugs, moral relativism, and other forces they saw as damaging to the family. Partly to counter this, Mormons put an even greater emphasis on family life, religious education, and missionary work, becoming more conservative in the process. As a result, Mormons today are probably less integrated with mainstream society than they were in the early 1960s. Although black people have been members of Mormon congregations since Joseph Smith's time, before 1978, black membership was small. From 1852 to 1978, the LDS Church enforced a policy that restricted men of black African descent from being ordained to the church's lay priesthood.
The church was sharply criticized for its policy during the civil rights movement, but the policy remained in force until a 1978 reversal that was prompted in part by questions about mixed-race converts in Brazil. In general, Mormons greeted the change with joy and relief. Since 1978, black membership has grown, and in 1997 there were approximately 500,000 black members of the church (about 5 percent of the total membership), mostly in Africa, Brazil and the Caribbean. Black membership has continued to grow substantially, especially in West Africa, where two temples have been built. Some black Mormons are members of the Genesis Group, an organization of black members that predates the priesthood ban, and is endorsed by the church. The LDS Church grew rapidly after World War II and became a worldwide organization as missionaries were sent across the globe. The church doubled in size every 15 to 20 years, and by 1996, there were more Mormons outside the United States than inside. In 2012, there were an estimated 14.8 million Mormons, with roughly 57 percent living outside the United States. It is estimated that approximately 4.5 million Mormons – roughly 30% of the total membership – regularly attend services. A majority of U.S. Mormons are white and non-Hispanic (84 percent). Most Mormons are distributed in North and South America, the South Pacific, and Western Europe. The global distribution of Mormons resembles a contact diffusion model, radiating out from the organization's headquarters in Utah. The church enforces general doctrinal uniformity; congregations on all continents teach the same doctrines, and international Mormons tend to absorb a good deal of Mormon culture, possibly because of the church's top-down hierarchy and a missionary presence. However, international Mormons often bring pieces of their own heritage into the church, adapting church practices to local cultures.
As of December 2019, the LDS Church reported having 16,565,036 members worldwide. Chile, Uruguay, and several areas in the South Pacific have a higher percentage of Mormons than the United States (which is at about 2 percent). South Pacific countries and dependencies that are more than 10 percent Mormon include American Samoa, the Cook Islands, Kiribati, Niue, Samoa, and Tonga. One of the central doctrinal issues that defined Mormonism in the 19th century was the practice of plural marriage, a form of religious polygamy. From 1852 until 1904, when the LDS Church banned the practice, many Mormons who had followed Brigham Young to the Utah Territory openly practiced polygamy. Mormons dedicate significant time and resources to serving in their churches. A prominent practice among young and retired members of the LDS Church is to serve a full-time proselytizing mission. Mormons have a health code which eschews alcoholic beverages, tobacco, tea, coffee, and addictive substances. They tend to be very family-oriented and have strong connections across generations and with extended family, reflective of their belief that families can be sealed together beyond death. They also have a strict law of chastity, requiring abstention from sexual relations outside heterosexual marriage and fidelity within marriage, though the Community of Christ is accepting of LGBTQ individuals and relationships. Mormons self-identify as Christian, but some non-Mormons consider Mormons to be non-Christian because some of their beliefs differ from those of Nicene Christianity. Mormons believe that Christ's church was restored through Joseph Smith and is guided by living prophets and apostles. Mormons believe in the Bible, as well as other books of scripture, such as the Book of Mormon. They have a unique view of cosmology and believe that all people are literal spirit-children of God.
Mormons believe that returning to God requires following the example of Jesus Christ, and accepting his atonement through repentance and ordinances such as baptism. During the 19th century, Mormon converts tended to gather to a central geographic location, a trend that reversed somewhat in the 1920s and 30s. The center of Mormon cultural influence is in Utah, and North America has more Mormons than any other continent, although the majority of Mormons live outside the United States. As of December 2020, the LDS Church reported having 16,663,663 members worldwide. Terminology The word Mormon was originally coined to describe any person who believes in the Book of Mormon as a volume of scripture. The terms Mormonite and Mormon were originally descriptive terms used by outsiders to the faith and occasionally used by church leaders. The term Mormon later evolved into a derogatory term, likely during the 1838 Mormon War, although the term was later adopted by Joseph Smith. Today, while the term Mormonism can act as a blanket term for all sects following the religious tradition started by Joseph Smith, many sects do not accept Mormon as a label. For example, the largest sect, The Church of Jesus Christ of Latter-day Saints, based in Salt Lake City, recently clarified in a style guide that it prefers the term Latter-day Saints among other acceptable terms. The term preferred by the Salt Lake-based LDS church has varied in the past, and at various points it has embraced the term Mormon and also stated that other sects within the shared faith tradition should not be called Mormon. The second-largest sect, the Community of Christ, also rejects the term Mormon due to its association with the practice of polygamy among Brighamite sects. Other sects, including several fundamentalist branches of the Brighamite tradition, embrace the term Mormon. History The history of the Mormons has shaped them into a people with a strong sense of unity and commonality.
From the start, Mormons have tried to establish what they call "Zion", a utopian society of the righteous. Mormon history can be divided into three broad time periods: (1) the early history during the lifetime of Joseph Smith, (2) a "pioneer era" under the leadership of Brigham Young and his successors, and (3) a modern era beginning around the turn of the 20th century. In the first period, Smith attempted to build a city called Zion, in which converts could gather. During the pioneer era, Zion became a "landscape of villages" in Utah. In modern times, Zion is still an ideal, though Mormons gather together in their individual congregations rather than a central geographic location. Beginnings The Mormon movement began with the publishing of the Book of Mormon in March 1830, which Smith claimed was a translation of golden plates containing the religious history of an ancient American civilization which had been compiled by the ancient prophet-historian Mormon. Smith claimed that an angel had directed him to the golden plates, buried in the Hill Cumorah. On April 6, 1830, Smith founded the Church of Christ. In 1832, Smith added an account of a vision he had sometime in the early 1820s while living in Upstate New York. This vision would come to be regarded by some Mormons as the most important event in human history after the birth, ministry, and resurrection of Jesus Christ. The early church grew westward as Smith sent missionaries to proselytize. In 1831, the church moved to Kirtland, Ohio, where missionaries had made a large number of converts and Smith began establishing an outpost in Jackson County, Missouri, where he planned to eventually build the city of Zion (or the New Jerusalem). In 1833, Missouri settlers, alarmed by the rapid influx of Mormons, expelled them from Jackson County into the nearby Clay County, where local residents were more welcoming.
After Smith led a mission, known as Zion's Camp, to recover the land, he began building Kirtland Temple in Lake County, Ohio, where the church flourished. When the Missouri Mormons were later asked to leave Clay County in 1836, they secured land in what would become Caldwell County. The Kirtland era ended in 1838, after the failure of a church-sponsored anti-bank caused widespread defections, and Smith regrouped with the remaining church in Far West, Missouri. During the fall of 1838, tensions escalated into the Mormon War with the old Missouri settlers. On October 27, the governor of Missouri ordered that the Mormons "must be treated as enemies" and be exterminated or driven from the state. Between November and April, some eight thousand displaced Mormons migrated east into Illinois. In 1839, the Mormons purchased the small town of Commerce, converted swampland on the banks of the Mississippi River, and renamed the area Nauvoo, Illinois, and began construction of the Nauvoo Temple. The city became the church's new headquarters and gathering place, and it grew rapidly, fueled in part by converts immigrating from Europe. Meanwhile, Smith introduced temple ceremonies meant to seal families together for eternity, as well as the doctrines of eternal progression or exaltation, and plural marriage. Smith created a service organization for women called the Relief Society, as well as an organization called the Council of Fifty, representing a future theodemocratic "Kingdom of God" on the earth. Smith also published the story of his First Vision, in which the Father and the Son appeared to him while he was about 14 years old.
In 1844, local prejudices and political tensions, fueled by Mormon peculiarity, internal dissent, and reports of polygamy, escalated into conflicts between Mormons and "anti-Mormons" in Illinois and Missouri. Smith was arrested, and on June 27, 1844, he and his brother Hyrum were killed by a mob in Carthage, Illinois. Because Hyrum was Smith's logical successor, their deaths caused a succession crisis, and Brigham Young assumed leadership over the majority of Latter Day Saints. Young had been a close associate of Smith's and was senior apostle of the Quorum of the Twelve. Smaller groups of Latter Day Saints followed other leaders to form other denominations of the Latter Day Saint movement. Pioneer era For two years after Smith's death, conflicts escalated between Mormons and other Illinois residents. To prevent war, Brigham Young led the Mormon pioneers (constituting most of the Latter Day Saints) to a temporary winter quarters in Nebraska and then, eventually (beginning in 1847), to what became the Utah Territory. Having failed to build Zion within the confines of American society, the Mormons began to construct a society in isolation, based on their beliefs and values. The cooperative ethic that Mormons had developed over the last decade and a half became important as settlers branched out and colonized a large desert region now known as the Mormon Corridor. Colonizing efforts were seen as religious duties, and the new villages were governed by the Mormon bishops (local lay religious leaders). The Mormons viewed land as commonwealth, devising and maintaining a co-operative system of irrigation that allowed them to build a farming community in the desert.
the Panama Canal in 1914, which reduced reliance on transcontinental railways for trade, as well as a decrease in immigration due to the outbreak of the First World War. Over 18,000 Manitoba residents enlisted in the first year of the war; by the end of the war, 14 Manitobans had received the Victoria Cross. During the First World War, Nellie McClung started the campaign for women's votes. On January 28, 1916, the vote for women was legalized. Manitoba was the first province to allow women to vote in provincial elections. This was two years before Canada as a country granted women the right to vote. After the First World War ended, severe discontent among farmers (over wheat prices) and union members (over wage rates) resulted in an upsurge of radicalism, coupled with a polarization over the rise of Bolshevism in Russia. The most dramatic result was the Winnipeg general strike of 1919. It began on 15 May and collapsed on 25 June 1919; as the workers gradually returned to their jobs, the Central Strike Committee decided to end the movement. Government efforts to violently crush the strike, including a Royal North-West Mounted Police charge into a crowd of protesters that resulted in multiple casualties and one death, had led to the arrest of the movement's leaders. In the aftermath, eight leaders went on trial, and most were convicted on charges of seditious conspiracy, illegal combinations, and seditious libel; four were deported under the Canadian Immigration Act. The Great Depression (1929–c. 1939) hit especially hard in Western Canada, including Manitoba. The collapse of the world market combined with a steep drop in agricultural production due to drought led to economic diversification, moving away from a reliance on wheat production. The Manitoba Co-operative Commonwealth Federation, forerunner to the New Democratic Party of Manitoba (NDP), was founded in 1932. Canada entered the Second World War in 1939. 
Winnipeg was one of the major commands for the British Commonwealth Air Training Plan to train fighter pilots, and there were air training schools throughout Manitoba. Several Manitoba-based regiments were deployed overseas, including Princess Patricia's Canadian Light Infantry. To raise money for the war effort, the Victory Loan campaign organized "If Day" in 1942. The event featured a simulated Nazi invasion and occupation of Manitoba, and eventually raised over C$65 million. Winnipeg was inundated during the 1950 Red River Flood and had to be partially evacuated. In that year, the Red River reached its highest level since 1861 and flooded most of the Red River Valley. The damage caused by the flood led then-Premier Duff Roblin to advocate for the construction of the Red River Floodway; it was completed in 1968 after six years of excavation. Permanent dikes were erected in eight towns south of Winnipeg, and clay dikes and diversion dams were built in the Winnipeg area. In 1997, the "Flood of the Century" caused over in damages in Manitoba, but the floodway prevented Winnipeg from flooding. In 1990, Prime Minister Brian Mulroney attempted to pass the Meech Lake Accord, a series of constitutional amendments to persuade Quebec to endorse the Canada Act 1982. Unanimous support in the legislature was needed to bypass public consultation. Cree politician Elijah Harper opposed the Accord because he did not believe First Nations had been adequately involved in its process, and thus it failed. Glen Murray, elected in Winnipeg in 1998, became the first openly gay mayor of a large North American city. The province was impacted by major flooding in 2009 and 2011. In 2004, Manitoba became the first province in Canada to ban indoor smoking in public places. In 2013, Manitoba was the second province to introduce accessibility legislation, protecting the rights of persons with disabilities.
Geography Manitoba is bordered by the provinces of Ontario to the east and Saskatchewan to the west, the territory of Nunavut to the north, and the US states of North Dakota and Minnesota to the south. Manitoba is at the centre of the Hudson Bay drainage basin, with a high volume of water draining into Lake Winnipeg and then north down the Nelson River into Hudson Bay. This basin's rivers reach far west to the mountains, far south into the United States, and east into Ontario. Major watercourses include the Red, Assiniboine, Nelson, Winnipeg, Hayes, Whiteshell and Churchill rivers. Most of Manitoba's inhabited south has developed in the prehistoric bed of Glacial Lake Agassiz. This region, particularly the Red River Valley, is flat and fertile; receding glaciers left hilly and rocky areas throughout the province. The province has a saltwater coastline bordering Hudson Bay and more than 110,000 lakes, covering approximately 15.6 percent of its surface area. Manitoba's major lakes are Lake Manitoba, Lake Winnipegosis, and Lake Winnipeg, the tenth-largest freshwater lake in the world. A total of of traditional First Nations lands and boreal forest on Lake Winnipeg's east side were officially designated as a UNESCO World Heritage Site known as Pimachiowin Aki in 2018. Baldy Mountain is the province's highest point at above sea level, and the Hudson Bay coast is the lowest at sea level. Riding Mountain, the Pembina Hills, Sandilands Provincial Forest, and the Canadian Shield are also upland regions. Much of the province's sparsely inhabited north and east lies on the irregular granite Canadian Shield, including Whiteshell, Atikaki, and Nopiming Provincial Parks. Extensive agriculture is found only in the province's southern areas, although there is grain farming in the Carrot Valley Region (near The Pas). Around 11 percent of Canada's farmland is in Manitoba. Climate Manitoba has an extreme continental climate.
Temperatures and precipitation generally decrease from south to north and increase from east to west. Manitoba is far from the moderating influences of mountain ranges or large bodies of water. Because of the generally flat landscape, it is exposed to cold Arctic high-pressure air masses from the northwest during January and February. In the summer, air masses sometimes come out of the Southern United States, as warm humid air is drawn northward from the Gulf of Mexico. Temperatures exceed numerous times each summer, and the combination of heat and humidity can bring the humidex value to the mid-40s. Carman, Manitoba, recorded the second-highest humidex ever in Canada in 2007, with 53.0. According to Environment Canada, Manitoba ranked first for clearest skies year round, and ranked second both for clearest skies in the summer and for being the sunniest province in the winter and spring. Southern Manitoba (including the city of Winnipeg) falls into the humid continental climate zone (Köppen Dfb). This area is cold and windy in the winter and often has blizzards because of the open landscape. Summers are warm with a moderate length. This region is the most humid area in the prairie provinces, with moderate precipitation. Southwestern Manitoba, though under the same climate classification as the rest of Southern Manitoba, is closer to the semi-arid interior of Palliser's Triangle. The area is drier and more prone to droughts than other parts of southern Manitoba. This area is cold and windy in the winter and has frequent blizzards due to the openness of the Canadian Prairie landscape. Summers are generally warm to hot, with low to moderate humidity. Southern parts of the province, just north of Tornado Alley, experience tornadoes, with 16 confirmed touchdowns in 2016. In 2007, on 22 and 23 June, numerous tornadoes touched down, the largest an F5 tornado that devastated parts of Elie (the strongest recorded tornado in Canada).
The province's northern sections (including the city of Thompson) fall in the subarctic climate zone (Köppen climate classification Dfc). This region features long and extremely cold winters and brief, warm summers with little precipitation. Overnight temperatures as low as occur on several days each winter. Flora and fauna Manitoba's natural communities may be grouped within five ecozones: boreal plains, prairie, taiga shield, boreal shield and Hudson plains. Three of these—taiga shield, boreal shield and Hudson plains—contain part of the Boreal forest of Canada, which covers the province's eastern, southeastern, and northern reaches. Forests make up about 48 percent of the province's land area. The forests consist of pines (Jack Pine, Red Pine, Eastern White Pine), spruces (White Spruce, Black Spruce), Balsam Fir, Tamarack (larch), poplars (Trembling Aspen, Balsam Poplar), birches (White Birch, Swamp Birch) and small pockets of Eastern White Cedar. Two sections of the province are not dominated by forest. The province's northeast corner bordering Hudson Bay is above the treeline and is considered tundra. The tallgrass prairie once dominated the south central and southeastern parts, including the Red River Valley. Mixed grass prairie is found in the southwestern region. Agriculture has replaced much of the natural prairie, but prairie can still be found in parks and protected areas; some are notable for the presence of the endangered western prairie fringed orchid. Manitoba is especially noted for its northern polar bear population; Churchill is commonly referred to as the "Polar Bear Capital". Other large animals, including moose, white-tailed deer, black bears, cougars, lynx, and wolves, are common throughout the province, especially in the provincial and national parks. There is a large population of red-sided garter snakes near Narcisse; the dens there are home to the world's largest concentration of snakes.
Manitoba's bird diversity is enhanced by its position on two major migration routes, with 392 confirmed species, 287 of which nest within the province. These include the great grey owl, the province's official bird, and the endangered peregrine falcon. Manitoba's lakes host 18 species of game fish, particularly species of trout, pike, and goldeye, as well as many smaller fish. Demography At the 2016 census, Manitoba had a population of 1,278,365, more than half of which is in the Winnipeg Capital Region (778,489 as of the 2016 census). Although initial colonization of the province revolved mostly around homesteading, the last century has seen a shift towards urbanization; Manitoba is the only Canadian province with over fifty-five percent of its population in a single city. According to the 2006 Canadian census, the largest ethnic group in Manitoba is English (22.9%), followed by German (19.1%), Scottish (18.5%), Ukrainian (14.7%), Irish (13.4%), Indigenous (10.6%), Polish (7.3%), Métis (6.4%), French (5.6%), Dutch (4.9%), Russian (4.0%), and Icelandic (2.4%). Almost one-fifth of respondents also identified their ethnicity as "Canadian". Indigenous peoples (including Métis) are Manitoba's fastest-growing ethnic group, representing 13.6 percent of Manitoba's population as of 2001 (some reserves refused to allow census-takers to enumerate their populations or were otherwise incompletely counted). There is a significant Franco-Manitoban minority (148,370). Gimli, Manitoba, is home to the largest Icelandic community outside of Iceland. Religion Most Manitobans belong to a Christian denomination: on the 2001 census, 758,760 Manitobans (68.7%) reported being Christian, followed by 13,040 (1.2%) Jewish, 5,745 (0.5%) Buddhist, 5,485 (0.5%) Sikh, 5,095 (0.5%) Muslim, 3,840 (0.3%) Hindu, 3,415 (0.3%) Indigenous spirituality and 995 (0.1%) pagan. 201,825 Manitobans (18.3%) reported no religious affiliation.
The largest Christian denominations by number of adherents were the Roman Catholic Church with 292,970 (27%); the United Church of Canada with 176,820 (16%); and the Anglican Church of Canada with 85,890 (8%). Economy Manitoba has a moderately strong economy based largely on natural resources. Its Gross Domestic Product was C$50.834 billion in 2008. The province's economy grew 2.4 percent in 2008, the third consecutive year of growth. The average individual income in Manitoba in 2006 was C$25,100 (compared to a national average of C$26,500), ranking fifth-highest among the provinces. As of October 2009, Manitoba's unemployment rate was 5.8 percent. Manitoba's economy relies heavily on agriculture, tourism, electricity, oil, mining, and forestry. Agriculture is vital and is found mostly in the southern half of the province, although grain farming occurs as far north as The Pas. The most common agricultural activity is cattle husbandry, followed by assorted grains and oilseed. Manitoba is the nation's largest producer of sunflower seed and dry beans, and one of the leading sources of potatoes. Portage la Prairie is a major potato processing centre. Richardson International, one of the largest oat mills in the world, also has a plant in the municipality. Manitoba's largest employers are government and government-funded institutions, including crown corporations and services like hospitals and universities. Major private-sector employers are The Great-West Life Assurance Company, Cargill Ltd., and Richardson International. Manitoba also has large manufacturing and tourism sectors.
Churchill's Arctic wildlife is a major tourist attraction; the town is a world capital for polar bear and beluga whale watchers. Manitoba is the only province with an Arctic deep-water seaport, at Churchill. In January 2018, the Canadian Federation of Independent Business claimed Manitoba was the most improved province for tackling red tape. Economic history Manitoba's early economy depended on mobility and living off the land. Indigenous Nations (Cree, Ojibwa, Dene, Sioux and Assiniboine) followed herds of bison and congregated to trade among themselves at key meeting places throughout the province. After the arrival of the first European traders in the 17th century, the economy centred on the trade of beaver pelts and other furs. Diversification of the economy came when Lord Selkirk brought the first agricultural settlers in 1811, though the triumph of the Hudson's Bay Company (HBC) over its competitors ensured the primacy of the fur trade over widespread agricultural colonization. HBC control of Rupert's Land ended in 1868; when Manitoba became a province in 1870, all land became the property of the federal government, with homesteads granted to settlers for farming. Transcontinental railways were constructed to simplify trade. Manitoba's economy depended mainly on farming, which persisted until drought and the Great Depression led to further diversification. Military bases CFB Winnipeg is a Canadian Forces Base at the Winnipeg International Airport. The base is home to flight operations support divisions and several training schools, as well as the 1 Canadian Air Division and Canadian NORAD Region Headquarters. 17 Wing of the Canadian Forces is based at CFB Winnipeg; the Wing has three squadrons and six schools. It supports 113 units from Thunder Bay to the Saskatchewan/Alberta border, and from the 49th parallel north to the high Arctic. 17 Wing acts as a deployed operating base for CF-18 Hornet fighter–bombers assigned to the Canadian NORAD Region. 
The two 17 Wing squadrons based in the city are: the 402 ("City of Winnipeg" Squadron), which flies the Canadian-designed and produced de Havilland Canada CT-142 Dash 8 navigation trainer in support of the 1 Canadian Forces Flight Training School's Air Combat Systems Officer and Airborne Electronic Sensor Operator training programs (which trains all Canadian Air Combat Systems Officers); and the 435 ("Chinthe" Transport and Rescue Squadron), which flies the Lockheed C-130 Hercules tanker/transport in airlift search and rescue roles, and is the only Air Force squadron equipped and trained to conduct air-to-air refuelling of fighter aircraft. Canadian Forces Base Shilo (CFB Shilo) is an Operations and Training base of the Canadian Forces east of Brandon. During the 1990s, Canadian Forces Base Shilo was designated as an Area Support Unit, acting as a local base of operations for Southwest Manitoba in times of military and civil emergency. CFB Shilo is the home of the 1st Regiment, Royal Canadian Horse Artillery, both battalions of the 1 Canadian Mechanized Brigade Group, and the Royal Canadian Artillery. The Second Battalion of Princess Patricia's Canadian Light Infantry (2 PPCLI), which was originally stationed in Winnipeg (first at Fort Osborne, then in Kapyong Barracks), has operated out of CFB Shilo since 2004. CFB Shilo hosts a training unit, 3rd Canadian Division Training Centre. It serves as a base for support units of 3rd Canadian Division, also including 3 CDSG Signals Squadron, Shared Services Unit (West), 11 CF Health Services Centre, 1 Dental Unit, 1 Military Police Regiment, and an Integrated Personnel Support Centre. The base houses 1,700 soldiers. Government and politics After control of Rupert's Land passed from Great Britain to the Government of Canada in 1869, Manitoba attained full-fledged rights and responsibilities of self-government as the first Canadian province carved out of the Northwest Territories.
The Legislative Assembly of Manitoba was established on 14 July 1870. Political parties first emerged between 1878 and 1883, with a two-party system (Liberals and Conservatives). The United Farmers of Manitoba appeared in 1922, and merged with the Liberals in 1932. Other parties, including the Co-operative Commonwealth Federation (CCF), appeared during the Great Depression; in the 1950s, Manitoban politics became a three-party system, and the Liberals gradually declined in power. The CCF became the New Democratic Party of Manitoba (NDP), which came to power in 1969. Since then, the Progressive Conservatives and the NDP have been the dominant parties. Like all Canadian provinces, Manitoba is governed by a unicameral legislative assembly. The executive branch is formed by the governing party; the party leader is the premier of Manitoba, the head of the executive branch. The head of state, Queen Elizabeth II, is represented by the Lieutenant Governor of Manitoba, who is appointed by the Governor General of Canada on the advice of the Prime Minister. The head of state's role is primarily ceremonial, although the Lieutenant Governor has the official responsibility of ensuring Manitoba has a duly constituted government. The Legislative Assembly consists of 57 members elected to represent the people of Manitoba. The premier of Manitoba is Heather Stefanson of the Progressive Conservative Party, who took office in 2021 following Brian Pallister's resignation. The province is represented in federal politics by 14 Members of Parliament and six Senators. Manitoba's judiciary consists of the Court of Appeal, the Court of Queen's Bench, and the Provincial Court. The Provincial Court is primarily for criminal law; 95 percent of criminal cases in Manitoba are heard here. The Court of Queen's Bench is the highest trial court in the province. It has four jurisdictions: family law (child and family services cases), civil law, criminal law (for indictable offences), and appeals.
The Court of Appeal hears appeals from both benches; its decisions can only be appealed to the Supreme Court of Canada. Official languages Both English and French are official languages of the legislature and courts of Manitoba, according to section 23 of the
in North America after Denali. The mountain was named after Sir William Edmond Logan, a Canadian geologist and founder of the Geological Survey of Canada (GSC). Mount Logan is located within Kluane National Park Reserve in southwestern Yukon, less than north of the Yukon–Alaska border. Mount Logan is the source of the Hubbard and Logan glaciers. Logan is believed to have the largest base circumference of any non-volcanic mountain on Earth (many shield volcanoes are much larger in size and mass), including a massif with eleven peaks over . Due to active tectonic uplifting, Mount Logan is still rising in height (approximately 0.35 mm per year). Before 1992, the exact elevation of Mount Logan was unknown and measurements ranged from . In May 1992, a GSC expedition climbed Mount Logan and fixed the current height of using GPS. Temperatures are extremely low on and near Mount Logan. On the plateau, air temperature hovers around in the winter and reaches near freezing in summer, with the median temperature for the year around . Minimal snow melt leads to a significant ice cap, reaching almost in certain spots. Peaks of the massif The Mount Logan massif is considered to contain all the surrounding peaks with less than of prominence, as listed below: Discovery and naming Mount Logan is not readily visible from the surrounding lowlands or the coast, due to its position in the heart of the Saint Elias Mountains, although it can be seen from out to sea. Its first reported sighting was in 1890 by Israel C. Russell, during an expedition to nearby Mount Saint Elias, from the crest of the Pinnacle Pass Hills (). He wrote: "The clouds parting toward the northeast revealed several giant peaks not before seen... One stranger, rising in three white domes far above the clouds, was especially magnificent". Russell gave the mountain its present name. In 1894 Mount Logan's elevation was determined to be about , making it the highest known peak in North America at the time. In 1898 Denali was determined to be higher. Ascent attempts First ascent In 1922, a geologist approached the Alpine Club of Canada with the suggestion that the club send a team to the mountain to reach the summit for the first time. An international team of Canadian, British and American climbers was assembled; they initially planned their attempt for 1924, but funding and preparation delays postponed the trip until 1925. The international team of climbers began their journey in early May, crossing the mainland from the Pacific coast by train. They then walked the remaining to within of the Logan Glacier, where they established base camp. In the early evening of June 23, 1925, Albert H. MacCarthy (leader), H.F. Lambart, Allen Carpé, W.W. Foster, Norman H. Read and Andy Taylor stood on top for the first time. It had taken them 65 days to approach the mountain from the nearest town, McCarthy, summit and return, with all climbers intact, although some of them suffered severe frostbite. Subsequent notable ascents and attempts 1957 East Ridge. Don Monk, Gil Roberts and three others (US) reached the East Peak on July 19 after a 24-day climb. 1959 East Ridge, second ascent and first alpine-style ascent, Hans Gmoser and five others (Canada). Starting from Kluane Lake, they hiked and skied to reach the base of the mountain. They climbed the ridge in six days and summited the East Peak on June 12. 1965 Hummingbird Ridge (South Ridge). Dick Long, Allen Steck, Jim Wilson, John Evans, Franklin Coale Sr. and Paul Bacon
Hegel wrote in his Preface to the Phenomenology of Spirit that a subject is constituted by "the process of reflectively mediating itself with itself." Hegel begins his definition of the subject at a standpoint derived from Aristotelian physics: "the unmoved which is also self-moving" (Preface, para. 22). That is, what is not moved by an outside force, but which propels itself, has a prima facie case for subjectivity. Hegel's next step, however, is to identify this power to move, this unrest that is the subject, as pure negativity. Subjective self-motion, for Hegel, comes not from any pure or simple kernel of authentic individuality, but rather, it is "...the bifurcation of the simple; it is the doubling which sets up opposition, and then again the negation of this indifferent diversity and of its antithesis" (Preface, para. 18). The Hegelian subject's modus operandi is therefore cutting, splitting and introducing distinctions by injecting negation into the flow of sense-perceptions. Subjectivity is thus a kind of structural effect, what happens when Nature is diffused, refracted around a field of negativity; the "unity of the subject" is, for Hegel, in fact a second-order effect, a "negation of negation". The subject experiences itself as a unity only by purposively negating the very diversity it itself had produced. The Hegelian subject may therefore be characterized either as "self-restoring sameness" or else as "reflection in otherness within itself" (Preface, para. 18).

Continental philosophy
The thinking of Karl Marx and Sigmund Freud provided a point of departure for questioning the notion of a unitary, autonomous subject, which for many thinkers in the Continental tradition is seen as the foundation of the liberal theory of the social contract. These thinkers opened the way for the deconstruction of the subject as a core concept of metaphysics. Freud's explorations of the unconscious mind added up to a wholesale indictment of Enlightenment notions of subjectivity. Among the most radical re-thinkers of human self-consciousness was Martin Heidegger, whose concept of Dasein or "Being-there" displaces traditional notions of the personal subject altogether. With Heidegger, phenomenology tries to go beyond the classical dichotomy between subject and object, because they are linked by an inseparable and original relationship: there can be no world without a subject, nor a subject without world.

Jacques Lacan, inspired by Heidegger and Ferdinand de Saussure, built on Freud's psychoanalytic model of the subject, in which the split subject is constituted by a double bind: alienated from jouissance when it leaves the Real and enters the Imaginary (during the mirror stage), and separated from the Other when it comes into the realm of language, difference, and demand in the Symbolic, or the Name of the Father. Thinkers such as the structural Marxist Louis Althusser and the poststructuralist Michel Foucault theorize the subject as a social construction, the so-called poststructuralist subject. According to Althusser, the "subject" is an ideological construction (more exactly, constructed by the "Ideological State Apparatuses"). One's subjectivity exists "always already" and is discovered through the process of interpellation. Ideology inaugurates one into being a subject, and every ideology is intended to maintain and glorify its idealized subject, as well as the metaphysical category of the subject itself (see antihumanism). According to Foucault, the subject is the "effect" of power and "disciplines" (see Discipline and Punish on the construction of the subject, or subjectivation, as student, soldier, "criminal", etc.). Foucault believed it was possible to transform oneself; he used the word ethopoiein, from ethos, to describe the process. Subjectification was a central concept in Gilles Deleuze and Félix Guattari's work as well.

Analytic philosophy
In contemporary analytic philosophy, the
workers as pejorative, despite the word being used by strikers themselves. Herman and Chomsky (1988) proposed a propaganda model hypothesizing systematic biases of U.S. media arising from structural economic causes. They hypothesize that media ownership by corporations, funding from advertising, the use of official sources, efforts to discredit independent media ("flak"), and "anti-communist" ideology act as the filters that bias news in favor of U.S. corporate interests. Many of these positions are supported by a 2002 study by Jim A. Kuypers: Press Bias and Politics: How the Media Frame Controversial Issues. In this study of 116 mainstream US papers, including The New York Times, the Washington Post, the Los Angeles Times, and the San Francisco Chronicle, Kuypers found that the mainstream print press in America operates within a narrow range of liberal beliefs. Those who expressed points of view further to the left were generally ignored, whereas those who expressed moderate or conservative points of view were often actively denigrated or labeled as holding a minority point of view. In short, political leaders, regardless of party, speaking within the press-supported range of acceptable discourse receive positive press coverage. Politicians, again regardless of party, speaking outside of this range are likely to receive negative press or be ignored. Kuypers also found that the liberal points of view expressed in editorial and opinion pages carried over into hard news coverage of the same issues. Although focusing primarily on the issues of race and homosexuality, Kuypers found that the press injected opinion into its news coverage of other issues such as welfare reform, environmental protection, and gun control, in all cases favoring a liberal point of view.
Henry Silverman (2011) of Roosevelt University analyzed a sample of fifty news-oriented articles on the Middle East conflict published on the Reuters.com websites for the use of classic propaganda techniques, logical fallacies and violations of the Reuters Handbook of Journalism, a manual of guiding ethical principles for the company's journalists. Across the articles, over 1,100 occurrences of propaganda, fallacies and handbook violations in 41 categories were identified and classified. In the second part of the study, a group of thirty-three university students were surveyed, before and after reading the articles, to assess their attitudes and motivation to support one or the other belligerent parties in the Middle East conflict, i.e., the Palestinians/Arabs or the Israelis. The study found that on average, subject sentiment shifted significantly following the readings in favor of the Arabs and that this shift was associated with particular propaganda techniques and logical fallacies appearing in the stories. Silverman inferred from the evidence that Reuters engages in systematically biased storytelling in favor of the Arabs/Palestinians and is able to influence audience affective behavior and motivate direct action along the same trajectory. Studies reporting perceptions of bias in the media are not limited to studies of print media. A joint study by the Joan Shorenstein Center on Press, Politics and Public Policy at Harvard University and the Project for Excellence in Journalism found that people see media bias in television news media such as CNN. Although both CNN and Fox were perceived in the study as not being centrist, CNN was perceived as being more liberal than Fox. Moreover, the study's findings concerning CNN's perceived bias are echoed in other studies. There is also a growing economics literature on mass media bias, both on the theoretical and the empirical side. 
On the theoretical side, the focus is on understanding to what extent the political positioning of mass media outlets is mainly driven by demand or supply factors. This literature is surveyed by Andrea Prat of Columbia University and David Stromberg of Stockholm University. According to Dan Sutter of the University of Oklahoma, a systematic liberal bias in the U.S. media could depend on the fact that owners and/or journalists typically lean to the left. Along the same lines, David Baron of Stanford GSB presents a game-theoretic model of mass media behaviour in which, given that the pool of journalists systematically leans towards the left or the right, mass media outlets maximise their profits by providing content that is biased in the same direction. They can do so because it is cheaper to hire journalists who write stories that are consistent with their own political position. A competing theory holds that supply and demand should drive the media toward a neutral balance, because consumers would naturally gravitate towards media they agree with. This argument fails to account for the imbalance in journalists' self-reported political allegiances, which distorts any market analogy with respect to supply: "Indeed, in 1982, 85 percent of Columbia Graduate School of Journalism students identified themselves as liberal, versus 11 percent conservative" (Lichter, Rothman, and Lichter 1986: 48, quoted in Sutter 2001). By the same argument, a more balanced outlet would raise its profits by far more than the slight increase in costs of hiring unbiased journalists, notwithstanding the extreme rarity of self-reported conservative journalists (Sutter, 2001). As mentioned above, Tim Groseclose of UCLA and Jeff Milyo of the University of Missouri at Columbia use think tank quotes in order to estimate the relative position of mass media outlets in the political spectrum.
The idea is to trace out which think tanks are quoted by various mass media outlets within news stories, and to match these think tanks with the political position of members of the U.S. Congress who quote them in a non-negative way. Using this procedure, Groseclose and Milyo obtain the stark result that all sampled news providers (except Fox News' Special Report and the Washington Times) are located to the left of the average Congress member, i.e., there are signs of a liberal bias in the US news media. The methods Groseclose and Milyo used to calculate this bias have been criticized by Mark Liberman, a professor of linguistics at the University of Pennsylvania. Liberman concludes by saying he thinks "that many if not most of the complaints directed against G&M are motivated in part by ideological disagreement – just as much of the praise for their work is motivated by ideological agreement. It would be nice if there were a less politically fraught body of data on which such modeling exercises could be explored." Sendhil Mullainathan and Andrei Shleifer of Harvard University construct a behavioural model built around the assumption that readers and viewers hold beliefs that they would like to see confirmed by news providers. When news customers share common beliefs, profit-maximizing media outlets find it optimal to select and/or frame stories in order to pander to those beliefs. On the other hand, when beliefs are heterogeneous, news providers differentiate their offer and segment the market by providing news stories that are slanted towards the two extreme positions in the spectrum of beliefs. Matthew Gentzkow and Jesse Shapiro of Chicago GSB present another demand-driven theory of mass media bias.
If readers and viewers have a priori views on the current state of affairs and are uncertain about the quality of the information about it being provided by media outlets, then the latter have an incentive to slant stories towards their customers' prior beliefs, in order to build and keep a reputation for high-quality journalism. The reason for this is that rational agents would tend to believe that pieces of information that go against their prior beliefs in fact originate from low-quality news providers. Given that different groups in society have different beliefs, priorities, and interests, to which group would the media tailor its bias? David Stromberg constructs a demand-driven model where media bias arises because different audiences have different effects on media profits. Advertisers pay more for affluent audiences and media may tailor content to attract this audience, perhaps producing a right-wing bias. On the other hand, urban audiences are more profitable to newspapers because of lower delivery costs. Newspapers may for this reason tailor their content to attract the profitable predominantly liberal urban audiences. Finally, because of the increasing returns to scale in news production, small groups such as minorities are less profitable. This biases media content against the interest of minorities. Steve Ansolabehere, Rebecca Lessem and Jim Snyder of the Massachusetts Institute of Technology analyze the political orientation of endorsements by U.S. newspapers. They find an upward trend in the average propensity to endorse a candidate, and in particular an incumbent one. There are also some changes in the average ideological slant of endorsements: while in the 1940s and in the 1950s there was a clear advantage to Republican candidates, this advantage continuously eroded in subsequent decades, to the extent that in the 1990s the authors find a slight Democratic lead in the average endorsement choice. 
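The reputation mechanism described above can be made concrete with a small Bayesian calculation. This is an illustrative sketch under strong simplifying assumptions (a high-quality outlet always reports the true state; a low-quality outlet reports either state with probability 1/2), not the actual Gentzkow–Shapiro model, which is considerably richer.

```python
# Minimal Bayesian sketch of the reputation mechanism: a report that
# confirms the reader's prior raises the perceived quality of the outlet.
def posterior_quality(prior_state, prior_high, report):
    """P(outlet is high-quality | its report), assuming a high-quality
    outlet reports the true state and a low-quality outlet reports
    either state with probability 1/2.

    prior_state: reader's prior P(state == "A")
    prior_high:  prior P(outlet is high-quality)
    report:      "A" or "B"
    """
    p_state = prior_state if report == "A" else 1 - prior_state
    p_report_given_high = p_state   # high-quality report matches the state
    p_report_given_low = 0.5        # low-quality report is uninformative
    num = p_report_given_high * prior_high
    den = num + p_report_given_low * (1 - prior_high)
    return num / den

# A reader who strongly believes "A" (prior 0.8) judges an outlet that
# reports "A" as more likely high-quality than one that reports "B":
print(posterior_quality(0.8, 0.5, "A"))  # ~0.615
print(posterior_quality(0.8, 0.5, "B"))  # ~0.286
```

This is exactly the incentive described in the text: to protect its reputation with a given audience, an outlet gains by slanting reports toward that audience's prior beliefs.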
John Lott and Kevin Hassett of the American Enterprise Institute study the coverage of economic news by looking at a panel of 389 U.S. newspapers from 1991 to 2004, and from 1985 to 2004 for a subsample comprising the top 10 newspapers and the Associated Press. For each release of official data about a set of economic indicators, the authors analyze how newspapers decide to report on them, as reflected by the tone of the related headlines. The idea is to check whether newspapers display some kind of partisan bias by giving more positive or negative coverage to the same economic figure, as a function of the political affiliation of the incumbent president. Controlling for the economic data being released, the authors find that there are between 9.6 and 14.7 percent fewer positive stories when the incumbent president is a Republican. Riccardo Puglisi of the Massachusetts Institute of Technology looks at the editorial choices of The New York Times from 1946 to 1997. He finds that the Times displays Democratic partisanship, with some watchdog aspects. This is the case because during presidential campaigns the Times systematically gives more coverage to Democratic topics of civil rights, health care, labor and social welfare, but only when the incumbent president is a Republican. These topics are classified as Democratic ones because Gallup polls show that on average U.S. citizens think that Democratic candidates would be better at handling problems related to them. According to Puglisi, in the post-1960 period the Times displays a more symmetric type of watchdog behaviour, because during presidential campaigns it also gives more coverage to the typically Republican issue of defense when the incumbent president is a Democrat, and less so when the incumbent is a Republican. Alan Gerber and Dean Karlan of Yale University use an experimental approach to examine not whether the media are biased, but whether the media influence political decisions and attitudes.
They conduct a randomized control trial just prior to the November 2005 gubernatorial election in Virginia and randomly assign individuals in Northern Virginia to (a) a treatment group that receives a free subscription to the Washington Post, (b) a treatment group that receives a free subscription to the Washington Times, or (c) a control group. They find that those who are assigned to the Washington Post treatment group are eight percentage points more likely to vote for the Democrat in the elections. The report also found that "exposure to either newspaper was weakly linked to a movement away from the Bush administration and Republicans." A self-described "progressive" media watchdog group, Fairness and Accuracy in Reporting (FAIR), in consultation with the Survey and Evaluation Research Laboratory at Virginia Commonwealth University, sponsored a 1998 survey.

In 2014, media communication researcher Jim A. Kuypers published a 40-year longitudinal, aggregate study of the political beliefs and actions of American journalists. In every single category, for instance social, economic, unions, health care, and foreign policy, he found that nationwide, print and broadcast journalists and editors as a group were "considerably" to the political left of the majority of Americans, and that these political beliefs found their way into news stories. Kuypers concluded, "Do the political proclivities of journalists influence their interpretation of the news? I answer that with a resounding, yes. As part of my evidence, I consider testimony from journalists themselves. ... [A] solid majority of journalists do allow their political ideology to influence their reporting." Jonathan M. Ladd, who has conducted intensive studies of media trust and media bias, concluded that the primary cause of belief in media bias is media telling their audience that particular media are biased.
People who are told that a medium is biased tend to believe that it is biased, and this belief is unrelated to whether that medium is actually biased or not. The only other factor with as strong an influence on belief that media is biased is extensive coverage of celebrities: a majority of people see such media as biased, while at the same time preferring media with extensive coverage of celebrities. Starting in 2017, the Knight Foundation and Gallup conducted research to try to understand the effect of reader bias on the reader's perception of news source bias. They created the NewsLens site to present news from a variety of sources without labeling where each article came from. Their research showed that those with more extreme political views tend to provide more biased ratings of news. NewsLens became generally available in 2020, with the goals of expanding on the research and helping the US public to read and share news with less bias. However, the platform has since been closed.

Efforts to correct bias
A technique used to avoid bias is the "point/counterpoint" or "round table", an adversarial format in which representatives of opposing views comment on an issue. This approach theoretically allows diverse views to appear in the media. However, the person organizing the report still has the responsibility to choose reporters or journalists that represent a diverse or balanced set of opinions, to ask them non-prejudicial questions, and to edit or arbitrate their comments fairly. When done carelessly, a point/counterpoint can be as unfair as a simple biased report, by suggesting that the "losing" side lost on its merits. Despite these challenges, exposing news consumers to differing viewpoints appears to be beneficial for a balanced understanding and more critical assessment of current events and latent topics.
Using this format can also lead to accusations that the reporter has created a misleading appearance that viewpoints have equal validity (sometimes called "false balance"). This may happen when a taboo exists around one of the viewpoints, or when one of the representatives habitually makes claims that are easily shown to be inaccurate. One such allegation of misleading balance came from Mark Halperin, political director of ABC News. He stated in an internal e-mail message that reporters should not "artificially hold George W. Bush and John Kerry 'equally' accountable" to the public interest, and that complaints from Bush supporters were an attempt to "get away with ... renewed efforts to win the election by destroying Senator Kerry." When the conservative web site the Drudge Report published this message, many Bush supporters viewed it as "smoking gun" evidence that Halperin was using ABC to propagandize against Bush to Kerry's benefit, by interfering with reporters' attempts to avoid bias. An academic content analysis of election news later found that coverage at ABC, CBS, and NBC was more favorable toward Kerry than Bush, while coverage at Fox News Channel was more favorable toward Bush. Scott Norvell, the London bureau chief for Fox News, stated in a May 20, 2005 interview with the Wall Street Journal that: "Even we at Fox News manage to get some lefties on the air occasionally, and often let them finish their sentences before we club them to death and feed the scraps to Karl Rove and Bill O'Reilly. And those who hate us can take solace in the fact that they aren't subsidizing Bill's bombast; we payers of the BBC license fee don't enjoy that peace of mind. Fox News is, after all, a private channel and our presenters are quite open about where they stand on particular stories. That's our appeal. People watch us because they know what they are getting. 
The Beeb's (the BBC's) institutionalized leftism would be easier to tolerate if the corporation was a little more honest about it". Another technique used to avoid bias is disclosure of affiliations that may be considered a possible conflict of interest. This is especially apparent when a news organization is reporting a story with some relevancy to the news organization itself or to its ownership individuals or conglomerate. Often this disclosure is mandated by the laws or regulations pertaining to stocks and securities. Commentators on news stories involving stocks are often required to disclose any ownership interest in those corporations or in their competitors. In rare cases, a news organization may dismiss or reassign staff members who appear biased. This approach was used in the Killian documents affair and after Peter Arnett's interview with the Iraqi press. It is presumed to have been employed in the case of Dan Rather over a story that he ran on 60 Minutes in the month prior to the 2004 election that attempted to impugn the military record of George W. Bush by relying on allegedly fake documents provided by Bill Burkett, a retired lieutenant colonel in the Texas Army National Guard. Finally, some countries have laws enforcing balance in state-owned media. Since 1991, the CBC and Radio-Canada, its French-language counterpart, have been governed by the Broadcasting Act. This act states, among other things: ...the programming provided by the Canadian broadcasting system should: (i) be varied and comprehensive, providing a balance of information, enlightenment and entertainment for men, women and children of all ages, interests and tastes, (...) (iv) provide a reasonable opportunity for the public to be exposed to the expression of differing views on matters of public concern. Besides these manual approaches, several (semi-)automated approaches have been developed by social scientists and computer scientists.
These approaches identify differences in news coverage, which may result from media bias, by analyzing the text and metadata, such as author and publishing date. For instance, NewsCube is a news aggregator that extracts phrases that describe a topic differently from one article to another. Another approach, matrix-based news aggregation, spans a matrix over two dimensions, such as publisher countries (in which articles have been published) and mentioned countries (on which country an article reports). As a result, each cell contains articles that have been published in one country and that report on another country. Particularly in international news topics, such an approach helps to reveal differences in media coverage between the involved countries. Attempts have also been made to use machine learning to analyze the bias of text. For example, person-oriented framing analysis attempts to identify frames, i.e., "perspectives", in news coverage on a topic by determining how each person mentioned in the topic's coverage is portrayed.

National and ethnic viewpoint
Many news organizations reflect, or are perceived to reflect in some way, the viewpoint of the geographic, ethnic, and national population that they primarily serve. Media within countries are sometimes seen as being sycophantic or unquestioning about the country's government. Western media are often criticized in the rest of the world (including eastern Europe, Asia, Africa, and the Middle East) as being pro-Western with regard to a variety of political, cultural and economic issues. Al Jazeera is frequently criticized both in the West and in the Arab world. The Israeli–Palestinian conflict and wider Arab–Israeli issues are a particularly controversial area, and nearly all coverage of any kind generates accusations of bias from one or both sides. This topic is covered in a separate article.
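The matrix-based news aggregation described earlier in this section can be sketched in a few lines: articles are bucketed by the pair (publisher country, mentioned country), so each cell of the matrix collects coverage published in one country about another. The field names and sample articles below are hypothetical, purely for illustration.

```python
# Sketch of matrix-based news aggregation: each cell (publisher country,
# mentioned country) collects the articles published in the first country
# that report on the second. Data and field names are hypothetical.
from collections import defaultdict

def build_matrix(articles):
    """articles: iterable of dicts with keys
    'publisher_country', 'mentioned_country', and 'title'."""
    matrix = defaultdict(list)
    for a in articles:
        cell = (a["publisher_country"], a["mentioned_country"])
        matrix[cell].append(a["title"])
    return matrix

articles = [
    {"publisher_country": "US", "mentioned_country": "FR", "title": "Vote in Paris"},
    {"publisher_country": "FR", "mentioned_country": "US", "title": "Elections in the US"},
    {"publisher_country": "US", "mentioned_country": "FR", "title": "French strikes"},
]
matrix = build_matrix(articles)
print(matrix[("US", "FR")])  # US-published coverage about France
```

Comparing the cell ("US", "FR") with the cell ("FR", "FR") would then surface how coverage of France differs between foreign and domestic publishers, which is the comparison the text says this approach enables.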
Anglophone bias in the world media
It has been observed that the world's principal suppliers of news, the news agencies, and the main buyers of news are Anglophone corporations, and this gives an Anglophone bias to the selection and depiction of events. Anglophone definitions of what constitutes news are paramount; the news provided originates in Anglophone capitals and responds first to their own rich domestic markets. Despite the plethora of news services, most news printed and broadcast throughout the world each day comes from only a few major agencies, the three largest of which are the Associated Press, Reuters and Agence France-Presse.

Religious bias
The media are often accused of bias favoring a particular religion or of bias against a particular religion. In some countries, only reporting approved by a state religion is permitted, whereas in other countries derogatory statements about any belief system are considered hate crimes and are illegal. The Satanic panic, a moral panic and episode of national hysteria that emerged in the U.S. in the 1980s (and thereafter spread to Canada, Britain, and Australia), was reinforced by tabloid media and infotainment. Scholar Sarah Hughes, in a study published in 2016, argued that the panic "both reflected and shaped a cultural climate dominated by the overlapping worldviews of politically active conservatives" whose ideology "was incorporated into the panic and reinforced through" tabloid media, sensationalist television and magazine reporting, and local news. Although the panic dissipated in the 1990s after it was discredited by journalists and the courts, Hughes argues that it has had an enduring influence in American culture and politics even decades later. In 2012, Huffington Post columnist Jacques Berlinerblau argued that secularism has often been misinterpreted in the media as another word for atheism, stating: "Secularism must be the most misunderstood and mangled ism in the American political lexicon.
Commentators on the right and the left routinely equate it with Stalinism, Nazism and Socialism, among other dreaded isms. In the United States, of late, another false equation has emerged. That would be the groundless association of secularism with atheism. The religious right has profitably promulgated this misconception at least since the 1970s." According to Stuart A. Wright, there are six factors that contribute to media bias against minority religions: first, the knowledge and familiarity of journalists with the subject matter; second, the degree of cultural accommodation of the targeted religious group; third, the limited economic resources available to journalists; fourth, time constraints; fifth, the sources of information used by journalists; and finally, the front-end/back-end disproportionality of reporting. According to Yale Law professor Stephen Carter, "it has long been the American habit to be more suspicious of—and more repressive toward—religions that stand outside the mainline Protestant-Roman Catholic-Jewish troika that dominates America's spiritual life." As for front-end/back-end disproportionality, Wright says: "news stories on unpopular or marginal religions frequently are predicated on unsubstantiated allegations or government actions based on faulty or weak evidence occurring at the front-end of an event. As the charges are weighed against material evidence, these cases often disintegrate. Yet rarely is there equal space and attention in the mass media given to the resolution or outcome of the incident. If the accused are innocent, often the public is not made aware."

Social media bias
Within the United States, Pew Research Center reported that 64% of Americans believed that social media had a toxic effect on U.S. society and culture in July 2020. Only 10% of Americans believed that it had a positive effect on society.
Some of the main concerns with social media lie with the spread of deliberately false or misinterpreted information and the spread of hate and extremism. Social scientists attribute the growth of misinformation and hate to the increase in echo chambers. Fueled by confirmation bias, online echo chambers allow users to be steeped within their own ideology. Because social media is tailored to a user's interests and selected friends, it is an easy outlet for political echo chambers. Another Pew Research poll in 2019 showed that 28% of US adults "often" find their news through social media, and 55% of US adults get their news from social media either "often" or "sometimes". Additionally, more people are reported as turning to social media for their news as the COVID-19 pandemic restricted politicians to online campaigns and social media live streams. GCF Global encourages online users to avoid echo chambers by interacting with different people and perspectives and by avoiding the temptation of confirmation bias. Media scholar Siva Vaidhyanathan, in his book Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy (2018), argues that on social media networks the most emotionally charged and polarizing topics usually predominate, and that "If you wanted to build a machine that would distribute propaganda to millions of people, distract them from important issues, energize hatred and bigotry, erode social trust, undermine journalism, foster doubts about science, and engage in massive surveillance all at once, you would make something a lot like Facebook."
In a 2021 report, researchers at New York University's Stern Center for Business and Human Rights found that Republicans' frequent argument that social media companies like Facebook and Twitter have an "anti-conservative" bias is false and lacks any reliable evidence supporting it; the report found that right-wing voices are in fact dominant on social media, and that the claim that these platforms have an anti-conservative lean "is itself a form of disinformation." A 2021 study in Nature Communications examined political bias on social media by assessing the degree to which Twitter users were exposed to content on the left and right—specifically, exposure on the home timeline (the "news feed"). The study found that conservative Twitter accounts are exposed to content on the right, whereas liberal accounts are exposed to moderate content, shifting those users' experiences toward the political center. The study determined: "Both in terms of information to which they are exposed and content they produce, drifters initialized with Right-leaning sources stay on the conservative side of the political spectrum. Those initialized with Left-leaning sources, on the other hand, tend to drift toward the political center: they are exposed to more conservative content and even start spreading it." These findings held true for both hashtags and links. The study also found that conservative accounts are exposed to substantially more low-credibility content than other accounts. A 2022 study in PNAS, using a long-running massive-scale randomized experiment, found that the political right enjoys higher algorithmic amplification than the political left in six out of seven countries
Abd ar-Rahman al-Nasai, Abu Dawood, Ibn Majah, Malik ibn Anas, al-Daraqutni. Some Western academics cautiously view the hadith collections as accurate historical sources. Scholars such as Madelung do not reject the narrations which have been compiled in later periods, but judge them in the context of history and on the basis of their compatibility with the events and figures. Muslim scholars, on the other hand, typically place a greater emphasis on the hadith literature instead of the biographical literature, since hadiths maintain a traditional chain of transmission (isnad); the lack of such a chain for the biographical literature makes it unverifiable in their eyes. Pre-Islamic Arabia The Arabian Peninsula was, and still is, largely arid with volcanic soil, making agriculture difficult except near oases or springs. Towns and cities dotted the landscape, two of the most prominent being Mecca and Medina. Medina was a large flourishing agricultural settlement, while Mecca was an important financial center for many surrounding tribes. Communal life was essential for survival in the desert conditions, supporting indigenous tribes against the harsh environment and lifestyle. Tribal affiliation, whether based on kinship or alliances, was an important source of social cohesion. Indigenous Arabs were either nomadic or sedentary. Nomadic groups constantly traveled seeking water and pasture for their flocks, while sedentary groups settled and focused on trade and agriculture. Nomadic survival also depended on raiding caravans or oases; nomads did not view this as a crime. In pre-Islamic Arabia, gods or goddesses were viewed as protectors of individual tribes, their spirits being associated with sacred trees, stones, springs and wells. As well as being the site of an annual pilgrimage, the Kaaba shrine in Mecca housed 360 idols of tribal patron deities. Three goddesses were worshipped, in some places as daughters of Allah: Allāt, Manāt and al-'Uzzá.
Monotheistic communities existed in Arabia, including Christians and Jews. Hanifs – native pre-Islamic Arabs who "professed a rigid monotheism" – are also sometimes listed alongside Jews and Christians in pre-Islamic Arabia, although their historicity is disputed among scholars. According to Muslim tradition, Muhammad himself was a Hanif and one of the descendants of Ishmael, son of Abraham. After a century of exhaustive archaeological investigation, no evidence has been found for a historical Abraham or Ishmael. The second half of the sixth century was a period of political disorder in Arabia and communication routes were no longer secure. Religious divisions were an important cause of the crisis. Judaism became the dominant religion in Yemen while Christianity took root in the Persian Gulf area. In line with broader trends of the ancient world, the region witnessed a decline in the practice of polytheistic cults and a growing interest in a more spiritual form of religion. While many were reluctant to convert to a foreign faith, those faiths provided intellectual and spiritual reference points. During the early years of Muhammad's life, the Quraysh tribe to which he belonged became a dominant force in western Arabia. They formed the cult association of hums, which tied members of many tribes in western Arabia to the Kaaba and reinforced the prestige of the Meccan sanctuary. To counter the effects of anarchy, Quraysh upheld the institution of sacred months during which all violence was forbidden, and it was possible to participate in pilgrimages and fairs without danger. Thus, although the association of hums was primarily religious, it also had important economic consequences for the city. Life Childhood and early life Abu al-Qasim Muhammad ibn Abdullah ibn Abd al-Muttalib ibn Hashim was born in Mecca about the year 570 and his birthday is believed to be in the month of Rabi' al-awwal. 
He belonged to the Banu Hashim clan, part of the Quraysh tribe, which was one of Mecca's prominent families, although it appears to have been less prosperous during Muhammad's early lifetime. Tradition places the year of Muhammad's birth as corresponding with the Year of the Elephant, which is named after the failed destruction of Mecca that year by Abraha, Yemen's king, who supplemented his army with elephants. Alternatively, some 20th-century scholars have suggested different years, such as 568 or 569. Muhammad's father, Abdullah, died almost six months before he was born. According to Islamic tradition, soon after birth he was sent to live with a Bedouin family in the desert, as desert life was considered healthier for infants; some western scholars reject this tradition's historicity. Muhammad stayed with his foster-mother, Halimah bint Abi Dhuayb, and her husband until he was two years old. At the age of six, Muhammad lost his biological mother Amina to illness and became an orphan. For the next two years, until he was eight years old, Muhammad was under the guardianship of his paternal grandfather, Abd al-Muttalib of the Banu Hashim clan, until the latter's death. He then came under the care of his uncle Abu Talib, the new leader of the Banu Hashim. According to Islamic historian William Montgomery Watt, there was a general disregard by guardians in taking care of weaker members of the tribes in Mecca during the 6th century: "Muhammad's guardians saw that he did not starve to death, but it was hard for them to do more for him, especially as the fortunes of the clan of Hashim seem to have been declining at that time." In his teens, Muhammad accompanied his uncle on Syrian trading journeys to gain experience in commercial trade. Islamic tradition states that when Muhammad was either nine or twelve while accompanying the Meccans' caravan to Syria, he met a Christian monk or hermit named Bahira who is said to have foreseen Muhammad's career as a prophet of God.
Little is known of Muhammad during his later youth, as available information is fragmented, making it difficult to separate history from legend. It is known that he became a merchant and "was involved in trade between the Indian Ocean and the Mediterranean Sea." Due to his upright character, he acquired the nickname "al-Amin" (Arabic: الامين), meaning "faithful, trustworthy", and "al-Sadiq", meaning "truthful", and was sought out as an impartial arbitrator. His reputation attracted a proposal in 595 from Khadijah, a successful businesswoman. Muhammad consented to the marriage, which by all accounts was a happy one. Several years later, according to a narration collected by historian Ibn Ishaq, Muhammad was involved in a well-known story about setting the Black Stone in place in the wall of the Kaaba in 605 CE. The Black Stone, a sacred object, was removed during renovations to the Kaaba. The Meccan leaders could not agree which clan should return the Black Stone to its place. They decided to ask the next man who came through the gate to make that decision; that man was the 35-year-old Muhammad. This event happened five years before the first revelation by Gabriel to him. He asked for a cloth and laid the Black Stone in its center. The clan leaders held the corners of the cloth and together carried the Black Stone to the right spot, then Muhammad laid the stone, satisfying the honor of all. Beginnings of the Quran Muhammad began to pray alone in a cave named Hira on Mount Jabal al-Nour, near Mecca, for several weeks every year (John Henry Haaren and Addison B. Poland (1904), p. 83). Islamic tradition holds that during one of his visits to that cave, in the year 610, the angel Gabriel appeared to him and commanded Muhammad to recite verses that would be included in the Quran. Consensus exists that the first Quranic words revealed were the beginning of Quran 96:1. Muhammad was deeply distressed upon receiving his first revelations.
After returning home, Muhammad was consoled and reassured by Khadijah and her Christian cousin, Waraka ibn Nawfal. He also feared that others would dismiss his claims, regarding him as possessed. Shi'a tradition states that Muhammad was not surprised or frightened at Gabriel's appearance; rather, he welcomed the angel as if he were expected. The initial revelation was followed by a three-year pause (a period known as fatra) during which Muhammad felt depressed and further gave himself to prayers and spiritual practices. When the revelations resumed, he was reassured and commanded to begin preaching: "Thy Guardian-Lord hath not forsaken thee, nor is He displeased" (Brown (2003), pp. 73–74). Sahih Bukhari narrates Muhammad describing his revelations as "sometimes it is (revealed) like the ringing of a bell". Aisha reported, "I saw the Prophet being inspired Divinely on a very cold day and noticed the sweat dropping from his forehead (as the Inspiration was over)". According to Welch, these descriptions may be considered genuine, since they are unlikely to have been forged by later Muslims. Muhammad was confident that he could distinguish his own thoughts from these messages. According to the Quran, one of the main roles of Muhammad is to warn the unbelievers of their eschatological punishment (Quran 38:70, Quran 6:19). Occasionally the Quran did not explicitly refer to Judgment day but provided examples from the history of extinct communities and warned Muhammad's contemporaries of similar calamities (Quran ). Muhammad not only warned those who rejected God's revelation but also dispensed good news to those who abandoned evil, listened to the divine words and served God. Muhammad's mission also involved preaching monotheism: the Quran commands Muhammad to proclaim and praise the name of his Lord and instructs him not to worship idols or associate other deities with God.
The key themes of the early Quranic verses included the responsibility of man towards his creator; the resurrection of the dead; God's final judgment, followed by vivid descriptions of the tortures in Hell and pleasures in Paradise; and the signs of God in all aspects of life. Religious duties required of the believers at this time were few: belief in God, asking for forgiveness of sins, offering frequent prayers, assisting others, particularly those in need, rejecting cheating and the love of wealth (considered to be significant in the commercial life of Mecca), being chaste and not committing female infanticide. Opposition According to Muslim tradition, Muhammad's wife Khadija was the first to believe he was a prophet. She was followed by Muhammad's ten-year-old cousin Ali ibn Abi Talib, close friend Abu Bakr, and adopted son Zaid. Around 613, Muhammad began to preach to the public (Quran ). Most Meccans ignored and mocked him, though a few became his followers. There were three main groups of early converts to Islam: younger brothers and sons of great merchants; people who had fallen out of the first rank in their tribe or failed to attain it; and the weak, mostly unprotected foreigners. According to Ibn Saad, opposition in Mecca started when Muhammad delivered verses that condemned idol worship and the polytheism practiced by the Meccan forefathers. However, Quranic exegesis maintains that it began as Muhammad started public preaching. As his followers increased, Muhammad became a threat to the local tribes and rulers of the city, whose wealth rested upon the Ka'aba, the focal point of Meccan religious life that Muhammad threatened to overthrow. Muhammad's denunciation of the Meccan traditional religion was especially offensive to his own tribe, the Quraysh, as they were the guardians of the Ka'aba.
Powerful merchants attempted to convince Muhammad to abandon his preaching; he was offered admission to the inner circle of merchants, as well as an advantageous marriage. He refused both of these offers. Tradition records at great length the persecution and ill-treatment of Muhammad and his followers. Sumayyah bint Khayyat, a slave of the prominent Meccan leader Abu Jahl, is famous as the first martyr of Islam, killed with a spear by her master when she refused to give up her faith. Bilal, another Muslim slave, was tortured by Umayyah ibn Khalaf, who placed a heavy rock on his chest to force his conversion. In 615, some of Muhammad's followers emigrated to the Ethiopian Kingdom of Aksum and founded a small colony under the protection of the Christian Ethiopian emperor Aṣḥama ibn Abjar. Ibn Sa'ad mentions two separate migrations. According to him, most of the Muslims returned to Mecca prior to the Hijra, while a second group rejoined them in Medina. Ibn Hisham and Tabari, however, only talk about one migration to Ethiopia. These accounts agree that Meccan persecution played a major role in Muhammad's decision to suggest that a number of his followers seek refuge among the Christians in Abyssinia. According to the famous letter of ʿUrwa preserved in al-Tabari, the majority of Muslims returned to their native town as Islam gained strength and high-ranking Meccans, such as Umar and Hamzah, converted. However, a completely different account exists of why the Muslims returned from Ethiopia to Mecca. According to this account—initially mentioned by Al-Waqidi and later repeated by Ibn Sa'ad and Tabari, but not by Ibn Hisham and not by Ibn Ishaq—Muhammad, desperately hoping for an accommodation with his tribe, pronounced a verse acknowledging the existence of three Meccan goddesses considered to be the daughters of Allah. Muhammad retracted the verses the next day at the behest of Gabriel, claiming that the verses had been whispered by the devil himself.
Instead, ridicule of these gods was offered. This episode, known as "The Story of the Cranes", is also referred to as the "Satanic Verses". According to the story, this led to a general reconciliation between Muhammad and the Meccans, and the Abyssinia Muslims began to return home. By the time they arrived, Gabriel had informed Muhammad that the two verses were not part of the revelation but had been inserted by Satan. Notable scholars at the time argued against the historical authenticity of these verses and the story itself on various grounds. Al-Waqidi was severely criticized by Islamic scholars such as Malik ibn Anas, al-Shafi'i, Ahmad ibn Hanbal, Al-Nasa'i, al-Bukhari, Abu Dawood, Al-Nawawi and others as a liar and forger. Later, the incident received some acceptance among certain groups, though strong objections to it continued past the tenth century, until rejection of these verses and of the story itself eventually became the only acceptable orthodox Muslim position. In 616 (or 617), the leaders of Makhzum and Banu Abd-Shams, two important Quraysh clans, declared a public boycott against Banu Hashim, their commercial rival, to pressure it into withdrawing its protection of Muhammad. The boycott lasted three years but eventually collapsed as it failed in its objective. During this time, Muhammad was able to preach only during the holy pilgrimage months, in which all hostilities between Arabs were suspended. Isra and Mi'raj Islamic tradition states that in 620, Muhammad experienced the Isra and Mi'raj, a miraculous night-long journey said to have occurred with the angel Gabriel. At the journey's beginning, the Isra, he is said to have traveled from Mecca on a winged steed to "the farthest mosque." Later, during the Mi'raj, Muhammad is said to have toured heaven and hell, and spoken with earlier prophets, such as Abraham, Moses, and Jesus.
Ibn Ishaq, author of the first biography of Muhammad, presents the event as a spiritual experience; later historians, such as Al-Tabari and Ibn Kathir, present it as a physical journey. Some western scholars hold that the Isra and Mi'raj journey traveled through the heavens from the sacred enclosure at Mecca to the celestial al-Baytu l-Maʿmur (heavenly prototype of the Kaaba); later traditions indicate Muhammad's journey as having been from Mecca to Jerusalem. Last years before Hijra Muhammad's wife Khadijah and uncle Abu Talib both died in 619, the year thus being known as the "Year of Sorrow". With the death of Abu Talib, leadership of the Banu Hashim clan passed to Abu Lahab, a tenacious enemy of Muhammad. Soon afterward, Abu Lahab withdrew the clan's protection over Muhammad. This placed Muhammad in danger; the withdrawal of clan protection implied that blood revenge for his killing would not be exacted. Muhammad then visited Ta'if, another important city in Arabia, and tried to find a protector, but his effort failed and further exposed him to physical danger. Muhammad was forced to return to Mecca. A Meccan man named Mut'im ibn Adi, together with the protection of the tribe of Banu Nawfal, made it possible for him to safely re-enter his native city. Many people visited Mecca on business or as pilgrims to the Kaaba. Muhammad took this opportunity to look for a new home for himself and his followers. After several unsuccessful negotiations, he found hope with some men from Yathrib (later called Medina). The Arab population of Yathrib were familiar with monotheism and were prepared for the appearance of a prophet because a Jewish community existed there. They also hoped, by means of Muhammad and the new faith, to gain supremacy over Mecca; the people of Yathrib were jealous of its importance as the place of pilgrimage.
Converts to Islam came from nearly all Arab tribes in Medina; by June of the subsequent year, seventy-five Muslims came to Mecca for pilgrimage and to meet Muhammad. Meeting him secretly by night, the group made what is known as the "Second Pledge of al-'Aqaba", or, in Orientalists' view, the "Pledge of War". Following the pledges at Aqabah, Muhammad encouraged his followers to emigrate to Yathrib. As with the migration to Abyssinia, the Quraysh attempted to stop the emigration. However, almost all Muslims managed to leave. Hijra The Hijra is the migration of Muhammad and his followers from Mecca to Medina in 622 CE. In June 622, warned of a plot to assassinate him, Muhammad secretly slipped out of Mecca and moved his followers to Medina, north of Mecca. Migration to Medina A delegation, consisting of the representatives of the twelve important clans of Medina, invited Muhammad to serve as chief arbitrator for the entire community, due to his status as a neutral outsider. There had been fighting in Yathrib: the dispute primarily involved its Arab and Jewish inhabitants and was estimated to have lasted for around a hundred years before 620. The recurring slaughters and disagreements over the resulting claims, especially after the Battle of Bu'ath in which all clans were involved, made it obvious to them that the tribal concepts of blood feud and an eye for an eye were no longer workable unless there was one man with authority to adjudicate in disputed cases. The delegation from Medina pledged themselves and their fellow citizens to accept Muhammad into their community and physically protect him as one of themselves. Muhammad instructed his followers to emigrate to Medina, and nearly all of them left Mecca. According to tradition, the Meccans, alarmed at the departure, plotted to assassinate Muhammad. With the help of Ali, Muhammad fooled the Meccans watching him and secretly slipped away from the town with Abu Bakr.
By 622, Muhammad had emigrated to Medina, a large agricultural oasis. Those who migrated from Mecca along with Muhammad became known as muhajirun (emigrants). Establishment of a new polity Among the first things Muhammad did to ease the longstanding grievances among the tribes of Medina was to draft a document known as the Constitution of Medina, "establishing a kind of alliance or federation" among the eight Medinan tribes and Muslim emigrants from Mecca; this specified the rights and duties of all citizens, and the relationship of the different communities in Medina (including that of the Muslim community to other communities, specifically the Jews and other "Peoples of the Book"). The community defined in the Constitution of Medina, the Ummah, had a religious outlook but was also shaped by practical considerations and substantially preserved the legal forms of the old Arab tribes. The first group of converts to Islam in Medina were the clans without great leaders; these clans had been subjugated by hostile leaders from outside. This was followed by the general acceptance of Islam by the pagan population of Medina, with some exceptions. According to Ibn Ishaq, this was influenced by the conversion of Sa'd ibn Mu'adh (a prominent Medinan leader) to Islam. Medinans who converted to Islam and helped the Muslim emigrants find shelter became known as the ansar (supporters). Muhammad then instituted brotherhood between the emigrants and the supporters, choosing Ali as his own brother. Beginning of armed conflict Following the emigration, the people of Mecca seized the property of the Muslim emigrants to Medina. War would later break out between the people of Mecca and the Muslims. Muhammad delivered Quranic verses permitting Muslims to fight the Meccans (see sura Al-Hajj, Quran ). According to the traditional account, on 11 February 624, while praying in the Masjid al-Qiblatayn in Medina, Muhammad received revelations from God that he should be facing Mecca rather than Jerusalem during prayer.
Muhammad adjusted to the new direction, and his companions praying with him followed his lead, beginning the tradition of facing Mecca during prayer. Muhammad ordered a number of raids to capture Meccan caravans, but only the eighth of them, the Raid of Nakhla, resulted in actual fighting and the capture of booty and prisoners. In March 624, Muhammad led some three hundred warriors in a raid on a Meccan merchant caravan. The Muslims set an ambush for the caravan at Badr. Aware of the plan, the Meccan caravan eluded the Muslims. A Meccan force was sent to protect the caravan and went on to confront the Muslims upon receiving word that the caravan was safe. The Battle of Badr commenced. Though outnumbered more than three to one, the Muslims won the battle, killing at least forty-five Meccans with fourteen Muslims dead. They also succeeded in killing many Meccan leaders, including Abu Jahl. Seventy prisoners were taken, many of whom were ransomed. Muhammad and his followers saw the victory as confirmation of their faith, and Muhammad ascribed the victory to the assistance of an invisible host of angels. The Quranic verses of this period, unlike the Meccan verses, dealt with practical problems of government and issues like the distribution of spoils. The victory strengthened Muhammad's position in Medina and dispelled earlier doubts among his followers. As a result, the opposition to him became less vocal. Pagans who had not yet converted were very bitter about the advance of Islam.
Towns and cities dotted the landscape; two of the most prominent being Mecca and Medina. Medina was a large flourishing agricultural settlement, while Mecca was an important financial center for many surrounding tribes. Communal life was essential for survival in the desert conditions, supporting indigenous tribes against the harsh environment and lifestyle. Tribal affiliation, whether based on kinship or alliances, was an important source of social cohesion. Indigenous Arabs were either nomadic or sedentary. Nomadic groups constantly traveled seeking water and pasture for their flocks, while the sedentary settled and focused on trade and agriculture. Nomadic survival also depended on raiding caravans or oases; nomads did not view this as a crime. In pre-Islamic Arabia, gods or goddesses were viewed as protectors of individual tribes, their spirits being associated with sacred trees, stones, springs and wells. As well as being the site of an annual pilgrimage, the Kaaba shrine in Mecca housed 360 idols of tribal patron deities. Three goddesses were worshipped, in some places as daughters of Allah: Allāt, Manāt and al-'Uzzá. Monotheistic communities existed in Arabia, including Christians and Jews. Hanifs – native pre-Islamic Arabs who "professed a rigid monotheism" – are also sometimes listed alongside Jews and Christians in pre-Islamic Arabia, although their historicity is disputed among scholars. According to Muslim tradition, Muhammad himself was a Hanif and one of the descendants of Ishmael, son of Abraham. After a century of exhaustive archaeological investigation, no evidence has been found for a historical Abraham or Ishmael. The second half of the sixth century was a period of political disorder in Arabia and communication routes were no longer secure. Religious divisions were an important cause of the crisis. Judaism became the dominant religion in Yemen while Christianity took root in the Persian Gulf area. 
In line with broader trends of the ancient world, the region witnessed a decline in the practice of polytheistic cults and a growing interest in a more spiritual form of religion. While many were reluctant to convert to a foreign faith, those faiths provided intellectual and spiritual reference points. During the early years of Muhammad's life, the Quraysh tribe to which he belonged became a dominant force in western Arabia. They formed the cult association of hums, which tied members of many tribes in western Arabia to the Kaaba and reinforced the prestige of the Meccan sanctuary. To counter the effects of anarchy, Quraysh upheld the institution of sacred months during which all violence was forbidden, and it was possible to participate in pilgrimages and fairs without danger. Thus, although the association of hums was primarily religious, it also had important economic consequences for the city. Life Childhood and early life Abu al-Qasim Muhammad ibn Abdullah ibn Abd al-Muttalib ibn Hashim was born in Mecca about the year 570 and his birthday is believed to be in the month of Rabi' al-awwal. He belonged to the Banu Hashim clan, part of the Quraysh tribe, which was one of Mecca's prominent families, although it appears less prosperous during Muhammad's early lifetime. Tradition places the year of Muhammad's birth as corresponding with the Year of the Elephant, which is named after the failed destruction of Mecca that year by the Abraha, Yemen's king, who supplemented his army with elephants. Alternatively some 20th century scholars have suggested different years, such as 568 or 569. Muhammad's father, Abdullah, died almost six months before he was born. According to Islamic tradition, soon after birth he was sent to live with a Bedouin family in the desert, as desert life was considered healthier for infants; some western scholars reject this tradition's historicity. 
Muhammad stayed with his foster-mother, Halimah bint Abi Dhuayb, and her husband until he was two years old. At the age of six, Muhammad lost his biological mother Amina to illness and became an orphan. For the next two years, until he was eight years old, Muhammad was under the guardianship of his paternal grandfather Abd al-Muttalib, of the Banu Hashim clan until his death. He then came under the care of his uncle Abu Talib, the new leader of the Banu Hashim. According to Islamic historian William Montgomery Watt there was a general disregard by guardians in taking care of weaker members of the tribes in Mecca during the 6th century, "Muhammad's guardians saw that he did not starve to death, but it was hard for them to do more for him, especially as the fortunes of the clan of Hashim seem to have been declining at that time." In his teens, Muhammad accompanied his uncle on Syrian trading journeys to gain experience in commercial trade. Islamic tradition states that when Muhammad was either nine or twelve while accompanying the Meccans' caravan to Syria, he met a Christian monk or hermit named Bahira who is said to have foreseen Muhammad's career as a prophet of God. Little is known of Muhammad during his later youth as available information is fragmented, making it difficult to separate history from legend. It is known that he became a merchant and "was involved in trade between the Indian Ocean and the Mediterranean Sea." Due to his upright character he acquired the nickname "al-Amin" (Arabic: الامين), meaning "faithful, trustworthy" and "al-Sadiq" meaning "truthful" and was sought out as an impartial arbitrator. His reputation attracted a proposal in 595 from Khadijah, a successful businesswoman. Muhammad consented to the marriage, which by all accounts was a happy one. 
Several years later, according to a narration collected by historian Ibn Ishaq, Muhammad was involved with a well-known story about setting the Black Stone in place in the wall of the Kaaba in 605 CE. The Black Stone, a sacred object, was removed during renovations to the Kaaba. The Meccan leaders could not agree which clan should return the Black Stone to its place. They decided to ask the next man who comes through the gate to make that decision; that man was the 35-year-old Muhammad. This event happened five years before the first revelation by Gabriel to him. He asked for a cloth and laid the Black Stone in its center. The clan leaders held the corners of the cloth and together carried the Black Stone to the right spot, then Muhammad laid the stone, satisfying the honor of all. Beginnings of the Quran Muhammad began to pray alone in a cave named Hira on Mount Jabal al-Nour, near Mecca for several weeks every year.John Henry Haaren, Addison B. Poland (1904), p. 83 Islamic tradition holds that during one of his visits to that cave, in the year 610 the angel Gabriel appeared to him and commanded Muhammad to recite verses that would be included in the Quran. Consensus exists that the first Quranic words revealed were the beginning of Quran 96:1. Muhammad was deeply distressed upon receiving his first revelations. After returning home, Muhammad was consoled and reassured by Khadijah and her Christian cousin, Waraka ibn Nawfal. He also feared that others would dismiss his claims as being possessed. Shi'a tradition states Muhammad was not surprised or frightened at Gabriel's appearance; rather he welcomed the angel, as if he was expected. The initial revelation was followed by a three-year pause (a period known as fatra) during which Muhammad felt depressed and further gave himself to prayers and spiritual practices. 
When the revelations resumed he was reassured and commanded to begin preaching: "Thy Guardian-Lord hath not forsaken thee, nor is He displeased" (Brown (2003), pp. 73–74). Sahih Bukhari narrates Muhammad describing his revelations as "sometimes it is (revealed) like the ringing of a bell". Aisha reported, "I saw the Prophet being inspired Divinely on a very cold day and noticed the sweat dropping from his forehead (as the Inspiration was over)". According to Welch these descriptions may be considered genuine, since they are unlikely to have been forged by later Muslims. Muhammad was confident that he could distinguish his own thoughts from these messages. According to the Quran, one of the main roles of Muhammad is to warn the unbelievers of their eschatological punishment (Quran 38:70, Quran 6:19). Occasionally the Quran did not explicitly refer to Judgment Day but provided examples from the history of extinct communities, warning Muhammad's contemporaries of similar calamities (Quran ). Muhammad not only warned those who rejected God's revelation but also dispensed good news to those who abandoned evil, listened to the divine words, and served God. Muhammad's mission also involved preaching monotheism: the Quran commands Muhammad to proclaim and praise the name of his Lord and instructs him not to worship idols or associate other deities with God. The key themes of the early Quranic verses included the responsibility of man towards his creator; the resurrection of the dead; God's final judgment, followed by vivid descriptions of the tortures in Hell and the pleasures in Paradise; and the signs of God in all aspects of life. The religious duties required of believers at this time were few: belief in God, asking for forgiveness of sins, offering frequent prayers, assisting others, particularly those in need, rejecting cheating and the love of wealth (considered significant in the commercial life of Mecca), being chaste, and not committing female infanticide.
Opposition According to Muslim tradition, Muhammad's wife Khadija was the first to believe he was a prophet. She was followed by Muhammad's ten-year-old cousin Ali ibn Abi Talib, close friend Abu Bakr, and adopted son Zaid. Around 613, Muhammad began to preach to the public (Quran ). Most Meccans ignored and mocked him, though a few became his followers. There were three main groups of early converts to Islam: younger brothers and sons of great merchants; people who had fallen out of the first rank in their tribe or failed to attain it; and the weak, mostly unprotected foreigners. According to Ibn Saad, opposition in Mecca started when Muhammad delivered verses that condemned idol worship and the polytheism practiced by the Meccan forefathers. Quranic exegesis, however, maintains that it began as Muhammad started public preaching. As his followers increased, Muhammad became a threat to the local tribes and the rulers of the city, whose wealth rested upon the Ka'aba, the focal point of Meccan religious life, which Muhammad threatened to overthrow. Muhammad's denunciation of the Meccan traditional religion was especially offensive to his own tribe, the Quraysh, as they were the guardians of the Ka'aba. Powerful merchants attempted to convince Muhammad to abandon his preaching; he was offered admission to the inner circle of merchants, as well as an advantageous marriage. He refused both of these offers. Tradition records at great length the persecution and ill-treatment of Muhammad and his followers. Sumayyah bint Khayyat, a slave of the prominent Meccan leader Abu Jahl, is famous as the first martyr of Islam; she was killed with a spear by her master when she refused to give up her faith. Bilal, another Muslim slave, was tortured by Umayyah ibn Khalaf, who placed a heavy rock on his chest to force his conversion.
In 615, some of Muhammad's followers emigrated to the Ethiopian Kingdom of Aksum and founded a small colony under the protection of the Christian Ethiopian emperor Aṣḥama ibn Abjar. Ibn Sa'ad mentions two separate migrations; according to him, most of the Muslims returned to Mecca prior to the Hijra, while a second group rejoined them in Medina. Ibn Hisham and Tabari, however, speak of only one migration to Ethiopia. These accounts agree that Meccan persecution played a major role in Muhammad's decision to suggest that a number of his followers seek refuge among the Christians in Abyssinia. According to the famous letter of ʿUrwa preserved in al-Tabari, the majority of Muslims returned to their native town as Islam gained strength and high-ranking Meccans, such as Umar and Hamzah, converted. However, a completely different account exists of why the Muslims returned from Ethiopia to Mecca. According to this account—initially mentioned by Al-Waqidi, then repeated by Ibn Sa'ad and Tabari, but not by Ibn Hisham and not by Ibn Ishaq—Muhammad, desperately hoping for an accommodation with his tribe, pronounced a verse acknowledging the existence of three Meccan goddesses considered to be the daughters of Allah. Muhammad retracted the verses the next day at the behest of Gabriel, claiming that the verses had been whispered by the devil himself; instead, a ridicule of these gods was offered. This episode, known as "The Story of the Cranes," is also known as the "Satanic Verses". According to the story, the pronouncement led to a general reconciliation between Muhammad and the Meccans, and the Abyssinian Muslims began to return home; when they arrived, Gabriel informed Muhammad that the two verses were not part of the revelation but had been inserted by Satan. Notable scholars of the time argued against the historic authenticity of these verses and the story itself on various grounds.
Al-Waqidi was severely criticized by Islamic scholars such as Malik ibn Anas, al-Shafi'i, Ahmad ibn Hanbal, Al-Nasa'i, al-Bukhari, Abu Dawood, Al-Nawawi and others as a liar and forger. Later, the incident received some acceptance among certain groups, though strong objections to it continued past the tenth century; the objections continued until rejection of these verses, and of the story itself, eventually became the only acceptable orthodox Muslim position. In 616 (or 617), the leaders of Makhzum and Banu Abd-Shams, two important Quraysh clans, declared a public boycott against Banu Hashim, their commercial rival, to pressure it into withdrawing its protection of Muhammad. The boycott lasted three years but eventually collapsed, as it failed in its objective. During this time, Muhammad was able to preach only during the holy pilgrimage months, in which all hostilities between Arabs were suspended. Isra and Mi'raj Islamic tradition states that in 620, Muhammad experienced the Isra and Mi'raj, a miraculous night-long journey said to have occurred with the angel Gabriel. At the journey's beginning, the Isra, he is said to have traveled from Mecca on a winged steed to "the farthest mosque." Later, during the Mi'raj, Muhammad is said to have toured heaven and hell and spoken with earlier prophets, such as Abraham, Moses, and Jesus. Ibn Ishaq, author of the first biography of Muhammad, presents the event as a spiritual experience; later historians, such as Al-Tabari and Ibn Kathir, present it as a physical journey. Some western scholars hold that the Isra and Mi'raj journey traveled through the heavens from the sacred enclosure at Mecca to the celestial al-Baytu l-Maʿmur (the heavenly prototype of the Kaaba); later traditions indicate Muhammad's journey as having been from Mecca to Jerusalem. Last years before Hijra Muhammad's wife Khadijah and uncle Abu Talib both died in 619, the year thus being known as the "Year of Sorrow".
With the death of Abu Talib, leadership of the Banu Hashim clan passed to Abu Lahab, a tenacious enemy of Muhammad. Soon afterward, Abu Lahab withdrew the clan's protection over Muhammad. This placed Muhammad in danger; the withdrawal of clan protection implied that blood revenge for his killing would not be exacted. Muhammad then visited Ta'if, another important city in Arabia, and tried to find a protector, but his effort failed and further brought him into physical danger. Muhammad was forced to return to Mecca. A Meccan man named Mut'im ibn Adi (with the protection of the tribe of Banu Nawfal) made it possible for him to safely re-enter his native city. Many people visited Mecca on business or as pilgrims to the Kaaba, and Muhammad took this opportunity to look for a new home for himself and his followers. After several unsuccessful negotiations, he found hope with some men from Yathrib (later called Medina). The Arab population of Yathrib was familiar with monotheism and was prepared for the appearance of a prophet because a Jewish community existed there. They also hoped, by means of Muhammad and the new faith, to gain supremacy over Mecca; the people of Yathrib were jealous of Mecca's importance as the place of pilgrimage. Converts to Islam came from nearly all the Arab tribes in Medina; by June of the subsequent year, seventy-five Muslims came to Mecca for pilgrimage and to meet Muhammad. Meeting him secretly by night, the group made what is known as the "Second Pledge of al-'Aqaba" or, in the Orientalists' view, the "Pledge of War". Following the pledges at Aqabah, Muhammad encouraged his followers to emigrate to Yathrib. As with the migration to Abyssinia, the Quraysh attempted to stop the emigration; however, almost all the Muslims managed to leave. Hijra The Hijra is the migration of Muhammad and his followers from Mecca to Medina in 622 CE.
In June 622, warned of a plot to assassinate him, Muhammad secretly slipped out of Mecca and moved his followers to Medina, north of Mecca. Migration to Medina A delegation consisting of the representatives of the twelve important clans of Medina invited Muhammad, as a neutral outsider, to serve as chief arbitrator for the entire community. There had been fighting in Yathrib: the dispute primarily involved its Arab and Jewish inhabitants and was estimated to have lasted for around a hundred years before 620. The recurring slaughters and disagreements over the resulting claims, especially after the Battle of Bu'ath, in which all the clans were involved, made it obvious to them that the tribal concepts of blood-feud and an eye for an eye were no longer workable unless there was one man with the authority to adjudicate in disputed cases. The delegation from Medina pledged themselves and their fellow-citizens to accept Muhammad into their community and to protect him physically as one of themselves. Muhammad instructed his followers to emigrate to Medina, until nearly all of them had left Mecca. Alarmed at the departure, the Meccans, according to tradition, plotted to assassinate Muhammad. With the help of Ali, Muhammad fooled the Meccans watching him and secretly slipped away from the town with Abu Bakr. By 622, Muhammad had emigrated to Medina, a large agricultural oasis. Those who migrated from Mecca along with Muhammad became known as muhajirun (emigrants).
Establishment of a new polity Among the first things Muhammad did to ease the longstanding grievances among the tribes of Medina was to draft a document known as the Constitution of Medina, "establishing a kind of alliance or federation" among the eight Medinan tribes and the Muslim emigrants from Mecca; this specified the rights and duties of all citizens and the relationship of the different communities in Medina, including that of the Muslim community to other communities, specifically the Jews and other "Peoples of the Book". The community defined in the Constitution of Medina, the Ummah, had a religious outlook but was also shaped by practical considerations and substantially preserved the legal forms of the old Arab tribes. The first group of converts to Islam in Medina were the clans without great leaders; these clans had been subjugated by hostile leaders from outside. This was followed by the general acceptance of Islam by the pagan population of Medina, with some exceptions. According to Ibn Ishaq, this was influenced by the conversion to Islam of Sa'd ibn Mu'adh, a prominent Medinan leader. Medinans who converted to Islam and helped the Muslim emigrants find shelter became known as the ansar (supporters). Muhammad then instituted brotherhood between the emigrants and the supporters, and he chose Ali as his own brother. Beginning of armed conflict Following the emigration, the people of Mecca seized the property of the Muslim emigrants to Medina. War would later break out between the people of Mecca and the Muslims. Muhammad delivered Quranic verses permitting the Muslims to fight the Meccans (see sura Al-Hajj, Quran ). According to the traditional account, on 11 February 624, while praying in the Masjid al-Qiblatayn in Medina, Muhammad received revelations from God that he should be facing Mecca rather than Jerusalem during prayer. Muhammad adjusted to the new direction, and his companions praying with him followed his lead, beginning the tradition of facing Mecca during prayer.
Muhammad ordered a number of raids to capture Meccan caravans, but only the eighth of them, the Raid of Nakhla, resulted in actual fighting and the capture of booty and prisoners. In March 624, Muhammad led some three hundred warriors in a raid on a Meccan merchant caravan. The Muslims set an ambush for the caravan at Badr. Aware of the plan, the Meccan caravan eluded the Muslims. A Meccan force was sent to protect the caravan and went on to confront the Muslims upon receiving word that the caravan was safe. The Battle of Badr commenced. Though outnumbered more than three to one, the Muslims won the battle, killing at least forty-five Meccans while losing fourteen men. They also succeeded in killing many Meccan leaders, including Abu Jahl. Seventy prisoners were taken, many of whom were ransomed. Muhammad and his followers saw the victory as confirmation of their faith, and Muhammad ascribed it to the assistance of an invisible host of angels. The Quranic verses of this period, unlike the Meccan verses, dealt with practical problems of government and issues like the distribution of spoils. The victory strengthened Muhammad's position in Medina and dispelled earlier doubts among his followers; as a result, the opposition to him became less vocal. Pagans who had not yet converted were very bitter about the advance of Islam. Two pagans, Asma bint Marwan of the Aws Manat tribe and Abu 'Afak of the 'Amr b. 'Awf tribe, had composed verses taunting and insulting the Muslims. They were killed by people belonging to their own or related clans, and Muhammad did not disapprove of the killings. This report, however, is considered by some to be a fabrication. Most members of those tribes converted to Islam, and little pagan opposition remained. Muhammad expelled from Medina the Banu Qaynuqa, one of the three main Jewish tribes, though some historians contend that the expulsion happened after Muhammad's death.
According to al-Waqidi, after Abd-Allah ibn Ubaiy spoke for them, Muhammad refrained from executing them and commanded that they be exiled from Medina. Following the Battle of Badr, Muhammad also made mutual-aid alliances with a number of Bedouin tribes to protect his community from attacks from the northern part of the Hejaz. Conflict with Mecca The Meccans were eager to avenge their defeat; to maintain economic prosperity, they needed to restore their prestige, which had been reduced at Badr. In the ensuing months, the Meccans sent ambush parties to Medina while Muhammad led expeditions against tribes allied with Mecca and sent raiders against a Meccan caravan. Abu Sufyan gathered an army of 3,000 men and set out for an attack on Medina. A scout alerted Muhammad to the Meccan army's presence and numbers a day later. The next morning, at the Muslim conference of war, a dispute arose over how best to repel the Meccans. Muhammad and many of the senior figures suggested that it would be safer to fight within Medina and take advantage of its heavily fortified strongholds. Younger Muslims argued that the Meccans were destroying their crops and that huddling in the strongholds would destroy Muslim prestige. Muhammad eventually conceded to the younger Muslims and readied the Muslim force for battle. Muhammad led his force outside to the mountain of Uhud (the location of the Meccan camp) and fought the Battle of Uhud on 23 March 625. Although the Muslim army had the advantage in the early encounters, lack of discipline on the part of strategically placed archers led to a Muslim defeat; 75 Muslims were killed, including Hamza, Muhammad's uncle, who became one of the best-known martyrs in the Muslim tradition. The Meccans did not pursue the Muslims; instead, they marched back to Mecca declaring victory, probably because Muhammad had been wounded and was thought dead.
When they discovered that Muhammad lived, the Meccans did not return, owing to false information about new forces coming to his aid. The attack had failed to achieve their aim of completely destroying the Muslims. The Muslims buried the dead and returned to Medina that evening. Questions accumulated about the reasons for the loss; Muhammad delivered Quranic verses indicating that the defeat was twofold: partly a punishment for disobedience, partly a test of steadfastness. Abu Sufyan directed his efforts towards another attack on Medina. He gained support from the nomadic tribes to the north and east of Medina, using propaganda about Muhammad's weakness, promises of booty, memories of Quraysh prestige, and bribery. Muhammad's new policy was to prevent alliances against him; whenever alliances against Medina were formed, he sent out expeditions to break them up. Muhammad heard of men massing with hostile intentions against Medina and reacted severely. One example is the assassination of Ka'b ibn al-Ashraf, a chieftain of the Jewish tribe of Banu Nadir, who had gone to Mecca and written poems that roused the Meccans' grief, anger and desire for revenge after the Battle of Badr. Around a year later, Muhammad expelled the Banu Nadir from Medina, forcing their emigration to Syria; he allowed them to take some possessions, as he was unable to subdue the Banu Nadir in their strongholds. The rest of their property was claimed by Muhammad in the name of God, as it was not gained with bloodshed. Muhammad surprised various Arab tribes, individually, with overwhelming force, causing his enemies to unite to annihilate him. Muhammad's attempts to prevent a confederation against him were unsuccessful, though he was able to increase his own forces and stop many tribes from joining his enemies. Siege of Medina With the help of the exiled Banu Nadir, the Quraysh military leader Abu Sufyan mustered a force of 10,000 men.
Muhammad prepared a force of about 3,000 men and adopted a form of defense unknown in Arabia at that time: the Muslims dug a trench wherever Medina lay open to cavalry attack. The idea is credited to a Persian convert to Islam, Salman the Persian. The siege of Medina began on 31 March 627 and lasted two weeks. Abu Sufyan's troops were unprepared for the fortifications, and after an ineffectual siege the coalition decided to return home. The Quran discusses this battle in sura Al-Ahzab, in verses . During the battle, the Jewish tribe of Banu Qurayza, located to the south of Medina, entered into negotiations with the Meccan forces to revolt against Muhammad. Although the Meccan forces were swayed by suggestions that Muhammad was sure to be overwhelmed, they desired reassurance in case the confederacy was unable to destroy him. No agreement was reached after prolonged negotiations, partly due to sabotage attempts by Muhammad's scouts. After the coalition's retreat, the Muslims accused the Banu Qurayza of treachery and besieged them in their forts for 25 days. The Banu Qurayza eventually surrendered; according to Ibn Ishaq, all the men apart from a few converts to Islam were beheaded, while the women and children were enslaved. Walid N. Arafat and Barakat Ahmad have disputed the accuracy of Ibn Ishaq's narrative. Arafat believes that Ibn Ishaq's Jewish sources, speaking over 100 years after the event, conflated this account with memories of earlier massacres in Jewish history; he notes that Ibn Ishaq was considered an unreliable historian by his contemporary Malik ibn Anas, and a transmitter of "odd tales" by the later Ibn Hajar. Ahmad argues that only some of the tribe were killed, while some of the fighters were merely enslaved. Watt finds Arafat's arguments "not entirely convincing", while Meir J. Kister has contradicted the arguments of Arafat and Ahmad. In the siege of Medina, the Meccans had exerted their available strength to destroy the Muslim community.
The failure resulted in a significant loss of prestige; their trade with Syria vanished. Following the Battle of the Trench, Muhammad made two expeditions to the north, both of which ended without any fighting. While returning from one of these journeys (or some years earlier, according to other early accounts), an accusation of adultery was made against Aisha, Muhammad's wife. Aisha was exonerated of the accusations when Muhammad announced that he had received a revelation confirming her innocence and directing that charges of adultery be supported by four eyewitnesses (sura 24, An-Nur). Truce of Hudaybiyyah Although Muhammad had delivered Quranic verses commanding the Hajj, the Muslims had not performed it because of Quraysh enmity. In the month of Shawwal 628, Muhammad ordered his followers to obtain sacrificial animals and to prepare for a pilgrimage (umrah) to Mecca, saying that God had promised him the fulfillment of this goal in a vision in which he was shaving his head after completion of the Hajj. Upon hearing of the approaching 1,400 Muslims, the Quraysh dispatched 200 cavalry to halt them. Muhammad evaded them by taking a more difficult route, enabling his followers to reach al-Hudaybiyya, just outside Mecca. According to Watt, although Muhammad's decision to make the pilgrimage was based on his dream, he was also demonstrating to the pagan Meccans that Islam did not threaten the prestige of the sanctuaries and that Islam was an Arabian religion. Negotiations commenced with emissaries traveling to and from Mecca. While these continued, rumors spread that one of the Muslim negotiators, Uthman bin al-Affan, had been killed by the Quraysh. Muhammad called upon the pilgrims to make a pledge not to flee (or to stick with Muhammad, whatever decision he made) if the situation descended into war with Mecca. This pledge became known as the "Pledge of Acceptance" or the "Pledge under the Tree".
News of Uthman's safety allowed the negotiations to continue, and a treaty scheduled to last ten years was eventually signed between the Muslims and the Quraysh. The main points of the treaty included the cessation of hostilities, the deferral of Muhammad's pilgrimage to the following year, and an agreement to send back any Meccan who emigrated to Medina without permission from their protector. Many Muslims were not satisfied with the treaty. However, the Quranic sura "Al-Fath" (The Victory) (Quran ) assured them that the expedition must be considered a victorious one. It was only later that Muhammad's followers realized the benefits behind the treaty: the requirement that the Meccans recognize Muhammad as an equal, the cessation of military activity allowing Medina to gain strength, and the admiration of Meccans who were impressed by the pilgrimage rituals. After signing the truce, Muhammad assembled an expedition against the Jewish oasis of Khaybar, known as the Battle of Khaybar, possibly because it housed the Banu Nadir, who were inciting hostilities against Muhammad, or to regain prestige after what appeared to be the inconclusive result of the truce of Hudaybiyya. According to Muslim tradition, Muhammad also sent letters to many rulers asking them to convert to Islam (the exact date is given variously in the sources). He sent messengers (with letters) to Heraclius of the Byzantine Empire (the eastern Roman Empire), Khosrau of Persia, the chief of Yemen, and some others. In the years following the truce of Hudaybiyya, Muhammad directed his forces against the Arabs on Transjordanian Byzantine soil in the Battle of Mu'tah. Final years Conquest of Mecca The truce of Hudaybiyyah held for two years. The tribe of Banu Khuza'a had good relations with Muhammad, whereas their enemies, the Banu Bakr, had allied with the Meccans. A clan of the Bakr made a night raid against the Khuza'a, killing a few of them.
The Meccans helped the Banu Bakr with weapons and, according to some sources, a few Meccans also took part in the fighting. After this event, Muhammad sent a message to Mecca with three conditions, asking the Meccans to accept one of them: they could pay blood money for the slain among the Khuza'ah tribe, disavow the Banu Bakr, or declare the truce of Hudaybiyyah null. The Meccans replied that they accepted the last condition. Soon they realized their mistake and sent Abu Sufyan to renew the Hudaybiyyah treaty, a request that Muhammad declined. Muhammad then began to prepare for a campaign. In 630, Muhammad marched on Mecca with 10,000 Muslim converts. With minimal casualties, Muhammad seized control of Mecca. He declared an amnesty for past offences, except for ten men and women who were "guilty of murder or other offences or had sparked off the war and disrupted the peace". Some of these were later pardoned. Most Meccans converted to Islam, and Muhammad proceeded to destroy all the statues of Arabian gods in and around the Kaaba. According to reports collected by Ibn Ishaq and al-Azraqi, Muhammad personally spared paintings or frescos of Mary and Jesus, but other traditions suggest that all pictures were erased. The Quran discusses the conquest of Mecca. Conquest of Arabia Following the conquest of Mecca, Muhammad was alarmed by a military threat from the confederate tribes of Hawazin, who were raising an army double the size of Muhammad's. The Banu Hawazin were old enemies of the Meccans. They were joined by the Banu Thaqif (inhabiting the city of Ta'if) who
stations now also provide voice identification. Warships, including those of the U.S. Navy, have long used signal lamps to exchange messages in Morse code. Modern use continues, in part, as a way to communicate while maintaining radio silence. Automatic Transmitter Identification System (ATIS) uses Morse code to identify uplink sources of analog satellite transmissions. Many amateur radio repeaters identify with Morse, even though they are used for voice communications. Applications for the general public An important application is signalling for help through SOS, "· · · − − − · · ·". This can be sent many ways: keying a radio on and off, flashing a mirror, toggling a flashlight, and similar methods. The SOS signal is not sent as three separate characters; rather, it is a prosign, and is keyed without gaps between characters. Some Nokia mobile phones offer an option to alert the user of an incoming text message with the Morse tone "· · · − − · · ·" (representing SMS, or Short Message Service). In addition, applications are now available for mobile phones that enable short messages to be input in Morse code. Morse code as an assistive technology Morse code has been employed as an assistive technology, helping people with a variety of disabilities to communicate. For example, Android operating system versions 5.0 and higher allow users to input text using Morse code as an alternative to a keypad or handwriting recognition. Morse can be sent by persons with severe motion disabilities, as long as they have some minimal motor control. One early solution to the problem that caretakers had to learn to decode Morse was an electronic typewriter with the codes written on the keys. Codes were sung by users; see the voice typewriter employing Morse, or "votem". Morse code can also be translated by computer and used in a speaking communication aid. In some cases, this means alternately blowing into and sucking on a plastic tube (a "sip-and-puff" interface).
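Translating Morse by computer, as in the communication aids described above, reduces to a table lookup once the signal has been segmented into dits and dahs. A minimal Python sketch (the table here is abbreviated to the letters needed for the example; a real aid would carry the full alphabet):

```python
# Minimal Morse decoder: '.'/'-' groups separated by spaces are letters,
# and " / " separates words. Table abbreviated for this example.
MORSE_TO_CHAR = {
    ".-": "A", "-.-.": "C", "-..": "D", ".": "E", "--": "M",
    "---": "O", ".-.": "R", "...": "S", "-": "T",
}

def decode(morse: str) -> str:
    """Decode a '.'/'-' transcription into text."""
    return " ".join(
        "".join(MORSE_TO_CHAR[letter] for letter in word.split())
        for word in morse.split(" / ")
    )

print(decode("-- --- .-. ... . / -.-. --- -.. ."))  # MORSE CODE
```

A speaking aid would feed the decoded string to a speech synthesizer; the dictionary-lookup core is the same.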
An important advantage of Morse code over row column scanning is that, once learned, it does not require looking at a display; it also appears faster than scanning. In one case reported in the radio amateur magazine QST, an old shipboard radio operator who had a stroke and lost the ability to speak or write could communicate with his physician (a radio amateur) by blinking his eyes in Morse. Two examples of communication in intensive care units were also published in QST magazine. Another example occurred in 1966 when prisoner of war Jeremiah Denton, brought on television by his North Vietnamese captors, Morse-blinked the word TORTURE. In these cases, interpreters were available to understand the series of eye-blinks. Representation, timing, and speeds International Morse code is composed of five elements:
- short mark, dot or dit (·): the "dit duration" is one time unit long
- long mark, dash or dah (−): three time units long
- inter-element gap between the dits and dahs within a character: one dit duration, or one unit, long
- short gap (between letters): three time units long
- medium gap (between words): seven time units long
Transmission Morse code can be transmitted in a number of ways: originally as electrical pulses along a telegraph wire, but also as an audio tone, a radio signal with short and long tones, or as a mechanical, audible, or visual signal (e.g. a flashing light) using devices like an Aldis lamp or a heliograph, a common flashlight, or even a car horn. Some mine rescues have used pulling on a rope: a short pull for a dot and a long pull for a dah. Morse code is transmitted using just two states (on and off). Historians have called it the first digital code. Morse code may be represented as a binary code, and that is what telegraph operators do when transmitting messages.
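The five timing elements translate directly into code. A small Python sketch, assuming the conventional one/three/seven-unit durations and using = for "signal on" and . for "signal off", one character per time unit (the abbreviated letter table is for illustration only):

```python
# Letters needed for the example word; '.' = dit, '-' = dah.
CHAR_TO_MORSE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def to_units(word: str) -> str:
    """Render a word as on/off time units: dit = '=', dah = '===',
    1 unit off between elements, 3 units off between letters."""
    letters = []
    for ch in word:
        elements = ["=" if e == "." else "===" for e in CHAR_TO_MORSE[ch]]
        letters.append(".".join(elements))   # 1-unit intra-character gaps
    return "...".join(letters)               # 3-unit inter-letter gaps

units = to_units("PARIS")
print(units)
print(len(units) + 7)  # adding the 7-unit word gap gives the 50-unit word
```

Adding the seven-unit word gap to the 43 on/off units of PARIS yields the standard 50-unit word used later in the speed calculations.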
Working from the above ITU definition, and further defining a bit as a dot time, a Morse code sequence may be made from a combination of the following five bit-strings:
- short mark, dot or dit (·): 1
- longer mark, dash or dah (−): 111
- intra-character gap (between the dits and dahs within a character): 0
- short gap (between letters): 000
- medium gap (between words): 0000000
Note that the marks and gaps alternate: dits and dahs are always separated by one of the gaps, and the gaps are always separated by a dit or a dah. Morse messages are generally transmitted by a hand-operated device such as a telegraph key, so there are variations introduced by the skill of the sender and receiver — more experienced operators can send and receive at faster speeds. In addition, individual operators differ slightly, for example, using slightly longer or shorter dahs or gaps, perhaps only for particular characters. This is called their "fist", and experienced operators can recognize specific individuals by it alone. A good operator who sends clearly and is easy to copy is said to have a "good fist"; a "poor fist" is a characteristic of sloppy or hard-to-copy Morse code. Cable code The very long time constants of 19th- and early 20th-century submarine communications cables required a different form of Morse signalling. Instead of keying a voltage on and off for varying times, the dits and dahs were represented by two polarities of voltage impressed on the cable, for a uniform time. Timing Below is an illustration of timing conventions.
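The five bit-strings above can be applied mechanically. A short sketch, assuming the usual '.'/'-' transcription for the dits and dahs of each character:

```python
def to_bits(morse: str) -> str:
    """Encode a '.'/'-' transcription (space-separated letters,
    ' / ' between words) using the five ITU bit-strings:
    dit -> 1, dah -> 111, and 0 / 000 / 0000000 for the three gaps."""
    def letter(l: str) -> str:
        return "0".join("1" if e == "." else "111" for e in l)
    def word(w: str) -> str:
        return "000".join(letter(l) for l in w.split())
    return "0000000".join(word(w) for w in morse.split(" / "))

print(to_bits("-- ---"))  # "M O": 111011100011101110111
```

Note how the output alternates runs of 1s (marks) and 0s (gaps), matching the alternation rule stated above.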
The phrase "MORSE CODE", in Morse code format, would normally be written something like this, where − represents dahs and · represents dits:

−−   −−−   ·−·   ···   ·   −·−·   −−−   −··   ·
M    O     R     S     E   C      O     D     E

Next is the exact conventional timing for this phrase, with = representing "signal on" and . representing "signal off", each for the time length of exactly one dit:

         1         2         3         4         5         6         7         8
12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
M------   O----------   R------   S----   E       C----------   O----------   D------   E
===.===...===.===.===...=.===.=...=.=.=...=.......===.=.===.=...===.===.===...===.=.=...=
   ^               ^    ^       ^             ^
   |              dah  dit      |             |
symbol space                letter space   word space

Spoken representation Morse code is often spoken or written with dah for dashes, dit for dots located at the end of a character, and di for dots located at the beginning or internally within the character. Thus, the following Morse code sequence: is spoken (or sung): There is little point in learning to read Morse as above; rather, the sound of all of the letters and symbols needs to be learned, for both sending and receiving. Speed in words per minute All Morse code elements depend on the dot length. A dah is the length of 3 dits (with no gaps between), and spacings are specified in numbers of dit lengths. An unambiguous method of specifying the transmission speed is to specify the dit duration as, for example, 50 milliseconds. Specifying the dit duration is, however, not the common practice. Usually, speeds are stated in words per minute. That introduces ambiguity, because words have different numbers of characters and characters have different dit lengths. It is not immediately clear how a specific word rate determines the dit duration in milliseconds, so some method of standardizing the transformation of a word rate to a dit duration is useful. A simple way to do this is to choose a dit duration that would send a typical word the desired number of times in one minute.
If, for example, the operator wanted a character speed of 13 words per minute, the operator would choose a dit rate that would send the typical word 13 times in exactly one minute. The typical word thus determines the dot length. It is common to assume that a word is 5 characters long. There are two common typical words: PARIS and CODEX. PARIS mimics a word rate that is typical of natural language words and reflects the benefits of Morse code's shorter code durations for common characters such as E and T. CODEX offers a word rate that is typical of 5-letter code groups (sequences of random letters). Using the word PARIS as a standard, the number of dit units is 50, and a simple calculation shows that the dit length at 20 words per minute is 60 milliseconds. Using the word CODEX, with 60 dit units, the dit length at 20 words per minute is 50 milliseconds. Because Morse code is usually sent by hand, it is unlikely that an operator could be that precise with the dot length, and the individual characteristics and preferences of the operators usually override the standards. For commercial radiotelegraph licenses in the United States, the Federal Communications Commission specifies tests for Morse code proficiency in words per minute and in code groups per minute. The FCC specifies that a word is 5 characters long. The Commission specifies Morse code test elements at 16 code groups per minute, 20 words per minute, 20 code groups per minute, and 25 words per minute. The words-per-minute rate would be close to the PARIS standard, and the code-groups-per-minute rate close to the CODEX standard. While the Federal Communications Commission no longer requires Morse code for amateur radio licenses, the old requirements were similar to the requirements for commercial radiotelegraph licenses. A difference between amateur radio licenses and commercial radiotelegraph licenses is that commercial operators must be able to receive code groups of random characters along with plain language text.
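Two conventional standard words are PARIS (50 dit units) and CODEX (60 dit units). Those counts, and the dit lengths they imply, follow directly from the timing rules and can be checked with a short calculation (an illustrative sketch; the function names are invented here):

```python
# International Morse for the letters of the two standard words.
CODE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "...",
        "C": "-.-.", "O": "---", "D": "-..", "E": ".", "X": "-..-"}

def word_units(word):
    """Dit units to send one word, including the trailing 7-unit word gap."""
    total = 0
    for ch in word:
        marks = CODE[ch]
        total += sum(3 if m == "-" else 1 for m in marks)  # dits (1) and dahs (3)
        total += len(marks) - 1                            # 1-unit gaps inside the character
    total += 3 * (len(word) - 1)                           # 3-unit gaps between letters
    return total + 7                                       # 7-unit gap after the word

def dit_ms(wpm, standard="PARIS"):
    """Dit duration in milliseconds at a given words-per-minute rate."""
    return 60_000 / (word_units(standard) * wpm)

print(word_units("PARIS"), word_units("CODEX"))  # 50 60
print(dit_ms(20), dit_ms(20, "CODEX"))           # 60.0 50.0
```

At 20 words per minute this reproduces the 60 ms (PARIS) and 50 ms (CODEX) dit lengths quoted above.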
For each class of license, the code group speed requirement is slower than the plain language text requirement. For example, for the Radiotelegraph Operator License, the examinee must pass a 20 word per minute plain text test and a 16 word per minute code group test. Based upon a 50 dit duration standard word such as PARIS, the time for one dit duration or one unit can be computed by the formula T = 1200 / W, where T is the unit time, or dit duration, in milliseconds, and W is the speed in words per minute. High-speed telegraphy contests are held; according to the Guinness Book of Records, in June 2005 at the International Amateur Radio Union's 6th World Championship in High Speed Telegraphy in Primorsko, Bulgaria, Andrei Bindasov of Belarus transmitted 230 Morse code marks of mixed text in one minute.

Farnsworth speed
Sometimes, especially while teaching Morse code, the timing rules above are changed so two different speeds are used: a character speed and a text speed. The character speed is how fast each individual letter is sent. The text speed is how fast the entire message is sent. For example, individual characters may be sent at a 13 words-per-minute rate, but the intercharacter and interword gaps may be lengthened so the word rate is only 5 words per minute. Using different character and text speeds is, in fact, a common practice, and is used in the Farnsworth method of learning Morse code.

Morse therefore developed an early forerunner to the modern International Morse code. The Morse system for telegraphy, which was first used in about 1844, was designed to make indentations on a paper tape when electric currents were received. Morse's original telegraph receiver used a mechanical clockwork to move a paper tape. When an electrical current was received, an electromagnet engaged an armature that pushed a stylus onto the moving paper tape, making an indentation on the tape. When the current was interrupted, a spring retracted the stylus and that portion of the moving tape remained unmarked.
Morse code was developed so that operators could translate the indentations marked on the paper tape into text messages. In his earliest design for a code, Morse had planned to transmit only numerals, and to use a codebook to look up each word according to the number which had been sent. However, the code was soon expanded by Alfred Vail in 1840 to include letters and special characters, so it could be used more generally. Vail estimated the frequency of use of letters in the English language by counting the movable type he found in the type-cases of a local newspaper in Morristown, New Jersey. The shorter marks were called "dots" and the longer ones "dashes", and the letters most commonly used were assigned the shortest sequences of dots and dashes. This code, first used in 1844, became known as Morse landline code, American Morse code, or Railroad Morse, until the end of railroad telegraphy in the U.S. in the 1970s.

Operator-led change from graphical to audible code
In the original Morse telegraph system, the receiver's armature made a clicking noise as it moved in and out of position to mark the paper tape. The telegraph operators soon learned that they could translate the clicks directly into dots and dashes, and write these down by hand, thus making the paper tape unnecessary. When Morse code was adapted to radio communication, the dots and dashes were sent as short and long tone pulses. It was later found that people become more proficient at receiving Morse code when it is taught as a language that is heard, instead of one read from a page. With the advent of tones produced by radiotelegraph receivers, the operators began to vocalize a dot as dit, and a dash as dah, to reflect the sounds of Morse code they heard. To conform to normal sending speed, dits which are not the last element of a code became voiced as di.
For example, the letter C (−·−·) is voiced as dah-di-dah-dit. Morse code was sometimes facetiously known as "iddy-umpty", a dit lampooned as "iddy" and a dah as "umpty", leading to the word "umpteen".

Gerke's refinement of Morse's code
The Morse code, as specified in the current international standard, International Morse Code Recommendation, ITU-R M.1677-1, was derived from a much-improved proposal by Friedrich Gerke in 1848 that became known as the "Hamburg alphabet". Gerke changed many of the codepoints, in the process doing away with the different-length dashes and different inter-element spaces of American Morse, leaving only two coding elements, the dot and the dash. Codes for German umlauted vowels were introduced. Gerke's code was adopted in Germany and Austria in 1851. This finally led to the International Morse code in 1865. The International Morse code adopted most of Gerke's codepoints. The codes for two letters were taken from a code system developed by Steinheil. A new codepoint was added for J, since Gerke did not distinguish between I and J. Changes were also made to a few other letters. This left only four codepoints identical to the original Morse code, namely E, H, K and N, and the latter two had their dahs extended to full length. The original American code being compared dates to 1838; the later American code shown in the table was developed in 1844.

Radiotelegraphy and aviation
In the 1890s, Morse code began to be used extensively for early radio communication before it was possible to transmit voice. In the late 19th and early 20th centuries, most high-speed international communication used Morse code on telegraph lines, undersea cables, and radio circuits. Although previous transmitters were bulky and the spark gap system of transmission was dangerous and difficult to use, there had been some early attempts: In 1910, the U.S. Navy experimented with sending Morse from an airplane.
However, the first regular aviation radiotelegraphy was on airships, which had space to accommodate the large, heavy radio equipment then in use. The same year, 1910, a radio on the airship America was instrumental in coordinating the rescue of its crew. During World War I, Zeppelin airships equipped with radio were used for bombing and naval scouting, and ground-based radio direction finders were used for airship navigation. Allied airships and military aircraft also made some use of radiotelegraphy. However, there was little aeronautical radio in general use during World War I, and in the 1920s there was no radio system used by such important flights as that of Charles Lindbergh from New York to Paris in 1927. Once he and the Spirit of St. Louis were off the ground, Lindbergh was truly incommunicado and alone. Morse code in aviation began regular use in the mid-1920s. By 1928, when the first airplane flight was made by the Southern Cross from California to Australia, one of its four crewmen was a radio operator who communicated with ground stations via radio telegraph. Beginning in the 1930s, both civilian and military pilots were required to be able to use Morse code, both for use with early communications systems and for identification of navigational beacons that transmitted continuous two- or three-letter identifiers in Morse code. Aeronautical charts show the identifier of each navigational aid next to its location on the map. In addition, rapidly moving field armies could not have fought effectively without radiotelegraphy; they moved more quickly than their communications services could put up new telegraph and telephone lines. This was seen especially in the blitzkrieg offensives of the Nazi German Wehrmacht in Poland, Belgium, France (in 1940), the Soviet Union, and in North Africa; by the British Army in North Africa, Italy, and the Netherlands; and by the U.S. Army in France and Belgium (in 1944), and in southern Germany in 1945.
Maritime flash telegraphy and radio telegraphy
Radiotelegraphy using Morse code was vital during World War II, especially in carrying messages between the warships and the naval bases of the belligerents. Long-range ship-to-ship communication was by radio telegraphy, using encrypted messages, because the voice radio systems on ships then were quite limited in both their range and their security. Radiotelegraphy was also extensively used by warplanes, especially by long-range patrol planes that were sent out by those navies to scout for enemy warships, cargo ships, and troop ships. Morse code was used as an international standard for maritime distress until 1999, when it was replaced by the Global Maritime Distress and Safety System. When the French Navy ceased using Morse code on January 31, 1997, the final message transmitted was "Calling all. This is our last cry before our eternal silence."

Demise of commercial telegraphy
In the United States the final commercial Morse code transmission was on July 12, 1999, signing off with Samuel Morse's original 1844 message, "What hath God wrought", and the prosign SK ("end of contact"). As of 2015, the United States Air Force still trains ten people a year in Morse. The United States Coast Guard has ceased all use of Morse code on the radio, and no longer monitors any radio frequencies for Morse code transmissions, including the international medium frequency (MF) distress frequency of 500 kHz. However, the Federal Communications Commission still grants commercial radiotelegraph operator licenses to applicants who pass its code and written tests. Licensees have reactivated the old California coastal Morse station KPH and regularly transmit from the site under either this call sign or as KSM. Similarly, a few U.S. museum ship stations are operated by Morse enthusiasts.

Operator proficiency
Morse code speed is measured in words per minute (wpm) or characters per minute (cpm). Characters have differing lengths because they contain differing numbers of dits and dahs.
Consequently, words also have different lengths in terms of dot duration, even when they contain the same number of characters. For this reason, a standard word is helpful to measure operator transmission speed; PARIS and CODEX are two such standard words. Operators skilled in Morse code can often understand ("copy") code in their heads at rates in excess of 40 wpm. In addition to knowing, understanding, and being able to copy the standard written alphanumeric and punctuation characters or symbols at high speeds, skilled high-speed operators must also be fully knowledgeable of all of the special unwritten Morse code symbols for the standard prosigns and the meanings of these special procedural signals in standard Morse code communications protocol. International contests in code copying are still occasionally held. In July 1939, at a contest in Asheville, North Carolina, in the United States, Ted R. McElroy W1JYN set a still-standing record for Morse copying, 75.2 wpm. Pierpont (2004) also notes that some operators may have passed 100 wpm. By this time, they are "hearing" phrases and sentences rather than words. The fastest speed ever sent by a straight key was achieved in 1942 by Harry Turner W9YZE (d. 1992), who reached 35 wpm in a demonstration at a U.S. Army base. To accurately compare code copying speed records of different eras it is useful to keep in mind that different standard words (50 dit durations versus 60 dit durations) and different interword gaps (5 dit durations versus 7 dit durations) may have been used when determining such speed records. For example, speeds run with the PARIS standard word may differ by up to 20% from those run with the CODEX standard. Today among amateur operators there are several organizations that recognize high-speed code ability, one group consisting of those who can copy Morse at 60 wpm. Also, Certificates of Code Proficiency are issued by several amateur radio societies, including the American Radio Relay League.
Their basic award starts at 10 wpm with endorsements as high as 40 wpm, and they are available to anyone who can copy the transmitted text. Members of the Boy Scouts of America may put a Morse interpreter's strip on their uniforms if they meet the standards for translating code at 5 wpm. Through May 2013, the First, Second, and Third Class (commercial) Radiotelegraph Licenses, using code tests based upon the standard word, were still being issued in the United States by the Federal Communications Commission. The First Class license required 20 wpm code group and 25 wpm text code proficiency, the others a 16 wpm code group test (five-letter blocks sent as a simulation of receiving encrypted text) and a 20 wpm code text (plain language) test. It was also necessary to pass written tests on operating practice and electronics theory. A unique additional demand for the First Class was a requirement of a year of experience for operators of shipboard and coast stations using Morse. This allowed the holder to be chief operator on board a passenger ship. However, since 1999 the use of satellite and very-high-frequency maritime communications systems (GMDSS) has made them obsolete. (By that point, meeting the experience requirement for the First was very difficult.) Currently, only one class of license, the Radiotelegraph Operator License, is issued. This is granted either when the tests are passed or as the Second and First are renewed and become this lifetime license. For new applicants, it requires passing a written examination on electronic theory and radiotelegraphy practices, as well as 16 wpm code-group and 20 wpm text tests. However, the code exams are currently waived for holders of Amateur Extra Class licenses who obtained their operating privileges under the old 20 wpm test requirement.

International Morse Code
Morse code has been in use for more than 160 years, longer than any other electrical coding system.
What is called Morse code today is actually somewhat different from what was originally developed by Vail and Morse. The Modern International Morse code, or continental code, was created by Friedrich Clemens Gerke in 1848 and initially used for telegraphy between Hamburg and Cuxhaven in Germany. Gerke changed nearly half of the alphabet and all of the numerals, providing the foundation for the modern form of the code. After some minor changes, International Morse Code was standardized at the International Telegraphy Congress in 1865 in Paris and was later made the standard by the International Telecommunication Union (ITU). Morse's original code specification, largely limited to use in the United States and Canada, became known as American Morse code or "railroad code". American Morse code is now seldom used except in historical re-enactments. Aviation In aviation, pilots use radio navigation aids. To ensure that the stations the pilots are using are serviceable, the stations transmit a set of identification letters (usually a two-to-five-letter version of the station name) in Morse code. Station identification letters are shown on air navigation charts. For example, the VOR-DME based at Vilo Acuña Airport in Cayo Largo del Sur, Cuba is coded as "UCL", and UCL in Morse code is transmitted on its radio frequency. In some countries, during periods of maintenance, the facility may radiate a T-E-S-T code () or the code may be removed which tells pilots and navigators that the station is unreliable. In Canada, the identification is removed entirely to signify the navigation aid is not |
of events over time on a map
Brain mapping, the techniques used to study the brain
Data mapping, data element mappings between two distinct data models
Gene mapping, the assignment of DNA fragments to chromosomes
Mind mapping, the drawing of ideas and the relations among them
Projection mapping, the projection of videos on the surface of objects with irregular shapes
Robotic mapping, creation and
the United States), Duprisal 30, Ulipristal 30, and UPRIS. The antiprogestin mifepristone (also known as RU-486) is available in five countries as a low-dose or mid-dose emergency contraceptive tablet, effective up to 120 hours after intercourse. Low-dose mifepristone ECPs are available by prescription in Armenia, Russia, Ukraine, and Vietnam and from a pharmacist without a prescription in China. Mid-dose mifepristone ECPs are available by prescription in China and Vietnam. Combined estrogen (ethinylestradiol) and progestin (levonorgestrel or norgestrel) pills used to be available as dedicated emergency contraceptive pills under several brand names: Schering PC4, Tetragynon, Neoprimavlar, and Preven (in the United States), but were withdrawn after more effective dedicated progestin-only (levonorgestrel) emergency contraceptive pills with fewer side effects became available. If other more effective dedicated emergency contraceptive pills (levonorgestrel, ulipristal acetate, or mifepristone) are not available, specific combinations of regular combined oral contraceptive pills can be taken in split doses 12 hours apart (the Yuzpe regimen), effective up to 72 hours after intercourse. The U.S. Food and Drug Administration (FDA) approved this off-label use of certain brands of regular combined oral contraceptive pills in 1997. As of 2014, there are 26 brands of regular combined oral contraceptive pills containing levonorgestrel or norgestrel available in the United States that can be used in the emergency contraceptive Yuzpe regimen, when none of the more effective and better-tolerated options are available.

Effectiveness
Ulipristal acetate and mid-dose mifepristone are both more effective than levonorgestrel, which is more effective than the Yuzpe method. The effectiveness of emergency contraception is expressed as a percentage reduction in pregnancy rate for a single use of EC. Using an example of "75% effective", the effectiveness is calculated as follows: ...
these numbers do not translate into a pregnancy rate of 25 percent. Rather, they mean that if 1,000 women have unprotected intercourse in the middle two weeks of their menstrual cycles, approximately 80 will become pregnant. Use of emergency contraceptive pills would reduce this number by 75 percent, to 20 women. The progestin-only regimen (using levonorgestrel) has an 89% effectiveness. The labeling on the U.S. brand Plan B explained this effectiveness rate by stating, "Seven out of every eight women who would have gotten pregnant will not become pregnant." In 1999, a meta-analysis of eight studies of the combined (Yuzpe) regimen concluded that the best point estimate of effectiveness was 74%. A 2003 analysis of two of the largest combined (Yuzpe) regimen studies, using a different calculation method, found effectiveness estimates of 47% and 53%. For both the progestin-only and Yuzpe regimens, the effectiveness of emergency contraception is highest when taken within 12 hours of intercourse and declines over time. The World Health Organization (WHO) suggested that reasonable effectiveness may continue for up to 120 hours (5 days) after intercourse. For 10 mg of mifepristone taken up to 120 hours (5 days) after intercourse, the combined estimate from three trials was an effectiveness of 83%. A review found that a moderate dose of mifepristone is better than LNG or Yuzpe, with delayed return of menstruation being the main adverse effect of most regimes. HRA Pharma changed its packaging information for Norlevo (levonorgestrel 1.5 mg, which is identical to many other EHCs) in November 2013, warning that the drug loses effectiveness in women who weigh more than 165 pounds and is completely ineffective for women who weigh over 176 pounds.
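The percentage-reduction arithmetic can be made concrete with a minimal sketch (the variable names are illustrative; the 80-per-1,000 baseline is the figure quoted above):

```python
baseline = 80          # expected pregnancies per 1,000 women, per the example above
effectiveness = 0.75   # "75% effective" means a 75% reduction in the pregnancy rate

with_ec = baseline * (1 - effectiveness)
print(with_ec)         # 20.0 pregnancies per 1,000 women
```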
Safety
The most common side effect reported by users of emergency contraceptive pills was nausea, reported by 14 to 23% of levonorgestrel-only users and 50.5% of Yuzpe regimen users; vomiting is much less common and unusual with levonorgestrel-only ECPs (5.6% of levonorgestrel-only users vs 18.8% of 979 Yuzpe regimen users in the 1998 WHO trial; 1.4% of 2,720 levonorgestrel-only users in the 2002 WHO trial). Anti-emetics are not routinely recommended with levonorgestrel-only ECPs. If a woman vomits within 2 hours of taking a levonorgestrel-only ECP, she should take a further dose as soon as possible. Other common side effects (each reported by less than 20% of levonorgestrel-only users in both the 1998 and 2002 WHO trials) were abdominal pain, fatigue, headache, dizziness, and breast tenderness. Side effects generally resolve within 24 hours, although temporary disruption of the menstrual cycle is commonly experienced. If taken before ovulation, the high doses of progestogen in levonorgestrel treatments may induce progestogen withdrawal bleeding a few days after the pills are taken. One study found that about half of women who used levonorgestrel ECPs experienced bleeding within 7 days of taking the pills. If levonorgestrel is taken after ovulation, it may increase the length of the luteal phase, thus delaying menstruation by a few days. Mifepristone, if taken before ovulation, may delay ovulation by 3–4 days (delayed ovulation may result in a delayed menstruation). These disruptions only occur in the cycle in which ECPs were taken; subsequent cycle length is not significantly affected. If a woman's menstrual period is delayed by two weeks or more, it is advised that she take a pregnancy test. (Earlier testing may not give accurate results.)
Existing pregnancy is not a contraindication in terms of safety, as there is no known harm to the woman, the course of her pregnancy, or the fetus if progestin-only or combined emergency contraception pills are accidentally used. However, EC is not indicated for a woman with a known or suspected pregnancy because it is not effective in women who are already pregnant. The World Health Organization (WHO) lists no medical condition for which the risks of emergency contraceptive pills outweigh the benefits. The American Academy of Pediatrics (AAP) and experts on emergency contraception have concluded that progestin-only ECPs are preferable to combined ECPs containing estrogen for all women, and particularly those with a history of blood clots, stroke, or migraine. There are no medical conditions in which progestin-only ECPs are contraindicated. Current venous thromboembolism, current or history of breast cancer, inflammatory bowel disease, and acute intermittent porphyria are conditions where the advantages of using emergency contraceptive pills generally outweigh the theoretical or proven risks. ECPs, like all other contraceptives, reduce the absolute risk of ectopic pregnancy by preventing pregnancies, and there is no increase in the relative risk of ectopic pregnancy in women who become pregnant after using progestin-only ECPs.

Interactions
The herbal preparation of St John's wort and some enzyme-inducing drugs (e.g. anticonvulsants or rifampicin) may reduce the effectiveness of ECP, and a larger dose may be required, especially in women who weigh more than 165 lbs.

Intrauterine device
An effective emergency contraception measure is the copper-T intrauterine device (IUD), which is generally recommended up to 5 days after unprotected intercourse or up to 5 days after probable ovulation. Some studies have found effectiveness up to 10 days after unprotected intercourse to prevent pregnancy.
A 2021 study found that the hormonal IUD was as effective at emergency contraception as the copper IUD, though it is not offered by clinicians for this purpose. Insertion of an IUD is more effective than the use of emergency contraceptive pills: pregnancy rates when used as emergency contraception are the same as with normal IUD use. Unlike emergency contraceptive pills, which work by delaying ovulation, the copper-T IUD works by interfering with sperm motility. Therefore, the copper IUD is equally effective as emergency contraception at all weight ranges. IUDs may be left in place following the subsequent menstruation to provide ongoing contraception for as long as desired (12+ years).

As regular contraception
One brand of levonorgestrel pills was marketed as an ongoing method of postcoital contraception. However, with typical use, failure rates are expected to be higher than with the regular use of other birth control methods. Like all hormonal methods, postcoital high-dose progestin-only oral contraceptive pills do not protect against sexually transmitted infections. ECPs are generally recommended for backup or "emergency" use; for example, if a woman has forgotten to take a birth control pill or when a condom is torn during sex. However, for individuals facing reproductive coercion, who are not able to use regular birth control, repeated use of EC pills may be the most viable option available.

High-risk sex and abortion
Making ECPs more widely available does not increase sexual risk-taking. While they are effective for individuals who use them in a timely fashion, the availability of EC pills does not appear to decrease abortion rates at the population level.
In 2012, the American Academy of Pediatrics (AAP) stated: "Despite multiple studies showing no increased risk behaviour and evidence that hormonal emergency contraception will not disrupt an established pregnancy, public and medical discourse reflects that personal values of physicians and pharmacists continue to affect emergency-contraception access, particularly for adolescents."

EC and sexual assault
Beginning in the 1960s, women who had been sexually assaulted were offered DES. Currently, the standard of care is to offer ulipristal or prompt placement of a copper IUD, which is the most effective form of EC. However, adherence to these best practices varies by the emergency department. Before these EC options were available (in 1996), pregnancy rates among females of child-bearing age who had been raped were around 5%. Although EC is recommended following sexual assault, room for improvement in clinical practice remains.

Mechanism of action
The primary mechanism of action of progestogen-only emergency contraceptive pills is to prevent fertilization by inhibition of ovulation. The best available evidence is that they do not have any post-fertilization effects such as the prevention of implantation. The U.S. FDA-approved labels and European EMA-approved labels (except for HRA Pharma's NorLevo) for levonorgestrel emergency contraceptive pills (based on labels for regular oral contraceptive pills) say they may cause endometrial changes that discourage implantation. Daily use of regular oral contraceptive pills can alter the endometrium (although this has not been proven to interfere with implantation), but the isolated use of a levonorgestrel emergency contraceptive pill does not have time to alter the endometrium. In March 2011, the International Federation of Gynecology and Obstetrics (FIGO) issued a statement that: "review of the evidence suggests that LNG [levonorgestrel] ECPs cannot prevent implantation of a fertilized egg.
Language on implantation should not be included in LNG ECP product labeling." In June 2012, a New York Times editorial called on the FDA to remove from the label the unsupported suggestion that levonorgestrel emergency contraceptive pills inhibit implantation. In November 2013, the European Medicines Agency (EMA) approved a change to the label for HRA Pharma's NorLevo saying it cannot prevent implantation of a fertilized egg. Progestogen-only emergency contraception does not appear to affect the function of the Fallopian tubes or increase the rate of ectopic pregnancies. The primary mechanism of action of progesterone receptor modulator emergency contraceptive pills like low-dose and mid-dose mifepristone and ulipristal acetate is to prevent fertilization by inhibition or delay of ovulation. One clinical study found that post-ovulatory administration of ulipristal acetate altered the endometrium, but whether the changes would inhibit implantation is unknown. The European EMA-approved labels for ulipristal acetate emergency contraceptive pills do not mention an effect on implantation, but the U.S. FDA-approved label says: "alterations to the endometrium that may affect implantation may also contribute to efficacy." The primary mechanism of action of copper-releasing intrauterine devices (IUDs) as emergency contraceptives is to prevent fertilization because of copper toxicity to sperm and ova. The very high effectiveness of copper-releasing IUDs as emergency contraceptives implies that they must also prevent some pregnancies by post-fertilization effects such as prevention of implantation.

History
In 1966, gynecologist John McLean Morris and biologist Gertrude Van Wagenen at the Yale School of Medicine reported the successful use of oral high-dose estrogen pills as post-coital contraceptives in women and rhesus macaque monkeys, respectively.
A few different drugs were studied, with a focus on high-dose estrogens, and it was originally hoped that postcoital contraception would prove viable as an ongoing contraceptive method. The first widely used methods were five-day treatments with high-dose estrogens, using diethylstilbestrol (DES) in the US and ethinylestradiol in the Netherlands by Haspels. In the early 1970s, the Yuzpe regimen was developed by A. Albert Yuzpe in 1974; progestin-only postcoital contraception was investigated (1975); and the copper IUD was first studied for use as emergency contraception (1975). Danazol was tested in the early 1980s in the hopes that it would have fewer side effects than Yuzpe, but was found to be ineffective. The Yuzpe regimen became the standard course of treatment for postcoital contraception in many countries in the 1980s. The first prescription-only combined estrogen-progestin dedicated product, Schering PC4 (ethinylestradiol and norgestrel), was approved in the UK in January 1984 and first marketed in October 1984. Schering introduced a second prescription-only combined product, Tetragynon (ethinylestradiol and levonorgestrel) in Germany in 1985. By 1997, Schering AG dedicated prescription-only combined products had been approved in only 9 countries: the UK (Schering PC4), New Zealand (Schering PC4), South Africa (E-Gen-C), Germany (Tetragynon), Switzerland (Tetragynon), Denmark (Tetragynon), Norway (Tetragynon), Sweden (Tetragynon) and Finland (Neoprimavlar); and had been withdrawn from marketing in New Zealand in 1997 to prevent it being sold over-the-counter. Regular combined oral
However, adherence to these best practices varies among emergency departments. Before these EC options were available (in 1996), pregnancy rates among females of child-bearing age who had been raped were around 5%. Although EC is recommended following sexual assault, room for improvement in clinical practice remains.

Mechanism of action

The primary mechanism of action of progestogen-only emergency contraceptive pills is to prevent fertilization by inhibition of ovulation. The best available evidence is that they do not have any post-fertilization effects such as the prevention of implantation. The U.S. FDA-approved labels and European EMA-approved labels (except for HRA Pharma's NorLevo) for levonorgestrel emergency contraceptive pills (based on labels for regular oral contraceptive pills) say they may cause endometrial changes that discourage implantation. Daily use of regular oral contraceptive pills can alter the endometrium (although this has not been proven to interfere with implantation), but the isolated use of a levonorgestrel emergency contraceptive pill does not leave time to alter the endometrium. In March 2011, the International Federation of Gynecology and Obstetrics (FIGO) issued a statement that: "review of the evidence suggests that LNG [levonorgestrel] ECPs cannot prevent implantation of a fertilized egg. Language on implantation should not be included in LNG ECP product labeling." In June 2012, a New York Times editorial called on the FDA to remove from the label the unsupported suggestion that levonorgestrel emergency contraceptive pills inhibit implantation. In November 2013, the European Medicines Agency (EMA) approved a change to the label for HRA Pharma's NorLevo saying it cannot prevent implantation of a fertilized egg. Progestogen-only emergency contraception does not appear to affect the function of the Fallopian tubes or increase the rate of ectopic pregnancies.
The primary mechanism of action of progesterone receptor modulator emergency contraceptive pills like low-dose and mid-dose mifepristone and ulipristal acetate is to prevent fertilization by inhibition or delay of ovulation. One clinical study found that post-ovulatory administration of ulipristal acetate altered the endometrium, but whether the changes would inhibit implantation is unknown. The European EMA-approved labels for ulipristal acetate emergency contraceptive pills do not mention an effect on implantation, but the U.S. FDA-approved label says: "alterations to the endometrium that may affect implantation may also contribute to efficacy." The primary mechanism of action of copper-releasing intrauterine devices (IUDs) as emergency contraceptives is to prevent fertilization because of copper toxicity to sperm and ova. The very high effectiveness of copper-releasing IUDs as emergency contraceptives implies that they must also prevent some pregnancies by post-fertilization effects such as prevention of implantation.

History

In 1966, gynecologist John McLean Morris and biologist Gertrude Van Wagenen at the Yale School of Medicine reported the successful use of oral high-dose estrogen pills as post-coital contraceptives in women and rhesus macaque monkeys, respectively. A few different drugs were studied, with a focus on high-dose estrogens, and it was originally hoped that postcoital contraception would prove viable as an ongoing contraceptive method. The first widely used methods were five-day treatments with high-dose estrogens, using diethylstilbestrol (DES) in the US and, in work by Haspels, ethinylestradiol in the Netherlands. In the 1970s, the Yuzpe regimen was developed by A. Albert Yuzpe (1974); progestin-only postcoital contraception was investigated (1975); and the copper IUD was first studied for use as emergency contraception (1975).
Danazol was tested in the early 1980s in the hope that it would have fewer side effects than Yuzpe, but was found to be ineffective. The Yuzpe regimen became the standard course of treatment for postcoital contraception in many countries in the 1980s. The first prescription-only combined estrogen-progestin dedicated product, Schering PC4 (ethinylestradiol and norgestrel), was approved in the UK in January 1984 and first marketed in October 1984. Schering introduced a second prescription-only combined product, Tetragynon (ethinylestradiol and levonorgestrel), in Germany in 1985. By 1997, Schering AG's dedicated prescription-only combined products had been approved in only 9 countries: the UK (Schering PC4), New Zealand (Schering PC4), South Africa (E-Gen-C), Germany (Tetragynon), Switzerland (Tetragynon), Denmark (Tetragynon), Norway (Tetragynon), Sweden (Tetragynon) and Finland (Neoprimavlar); Schering PC4 was withdrawn from marketing in New Zealand in 1997 to prevent it from being sold over-the-counter. Regular combined oral contraceptive pills (which were less expensive and more widely available) were more commonly used for the Yuzpe regimen even in countries where dedicated products were available. Over time, interest in progestin-only treatments increased. The Special Program on Human Reproduction (HRP), an international organization whose members include the World Bank and World Health Organization, "played a pioneering role in emergency contraception" by "confirming the effectiveness of levonorgestrel." After the WHO conducted a large trial comparing Yuzpe and levonorgestrel in 1998, combined estrogen-progestin products were gradually withdrawn from some markets (Preven in the United States, discontinued May 2004; Schering PC4 in the UK, discontinued October 2001; and Tetragynon in France) in favor of progestin-only EC, although prescription-only dedicated Yuzpe regimen products are still available in some countries.
In 2002, China became the first country in which mifepristone was registered for use as EC. In 2020, Japan announced it would consider easing regulations on the sale of emergency contraceptive pills without a prescription. Non-profit groups submitted a petition to the health ministry calling for prescription-free access to the pill; they had collected more than 100,000 signatures.

Calculating effectiveness

Early studies of emergency contraceptives did not attempt to calculate a failure rate; they simply reported the number of women who became pregnant after using an emergency contraceptive. Since 1980, clinical trials of emergency contraception have first estimated the number of pregnancies expected in the study group if no treatment were given. The effectiveness is then the percentage reduction in pregnancies: one minus the ratio of observed pregnancies to the estimated number expected without treatment. Placebo-controlled trials that could give a precise measure of the pregnancy rate without treatment would be unethical, so the effectiveness percentage is based on estimated pregnancy rates. These are currently estimated using variants of the calendar method. Women with irregular cycles for any reason (including recent hormone use such as oral contraceptives and breastfeeding) must be excluded from such calculations. Even for women included in the calculation, the limitations of calendar methods of fertility determination have long been recognized. In their February 2014 review article on emergency contraception, Trussell and Raymond note: Calculation of effectiveness, and particularly the denominator of the fraction, involves many assumptions that are difficult to validate. The risk of pregnancy for women requesting ECPs appears to be lower than assumed in the estimates of ECP efficacy, which are consequently likely to be overestimates. Yet, precise estimates of efficacy may not be highly relevant to many women who have had unprotected intercourse, since ECPs are often the only available treatment.
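The effectiveness calculation described above (the "prevented fraction") can be sketched in a few lines of Python. The numbers in the example are purely illustrative, not from any trial:

```python
def prevented_fraction(observed: int, expected: float) -> float:
    """Effectiveness as the proportion of expected pregnancies averted.

    observed: pregnancies actually seen in the treated group
    expected: pregnancies estimated for the same group without treatment
              (derived from calendar-method estimates of conception risk)
    """
    if expected <= 0:
        raise ValueError("expected pregnancies must be positive")
    return 1.0 - observed / expected


# Hypothetical example: 8 observed pregnancies where 80 were expected.
print(f"{prevented_fraction(8, 80.0):.0%}")  # 90% effectiveness
```

As Trussell and Raymond caution, the denominator (`expected`) rests on assumptions that are hard to validate, so the resulting percentage is an estimate that tends toward overstatement.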
In 1999, hormonal assay was suggested as a more accurate method of estimating fertility for EC studies.

United States

DES

In 1971, a New England Journal of Medicine editorial calling attention to previously published studies on the use of DES as a postcoital contraceptive at Yale University, and a large study published in JAMA on the use of DES as a postcoital contraceptive at the University of Michigan, led to off-label use of DES as a postcoital contraceptive becoming prevalent at many university health services. In May 1973, in an attempt to restrict off-label use of DES as a postcoital contraceptive to emergency situations such as rape, an FDA Drug Bulletin was sent to all U.S. physicians and pharmacists saying that the FDA had approved, under restricted conditions, postcoital contraceptive use of DES. (In February 1975, the FDA Commissioner testified that the only error in the May 1973 FDA Drug Bulletin was that the FDA had not approved postcoital contraceptive use of DES.) In September 1973, the FDA published a proposed rule specifying patient labeling and special packaging requirements for any manufacturer seeking FDA approval to market DES as a postcoital contraceptive, inviting manufacturers to submit abbreviated new drug applications (ANDAs) for that indication, and notifying manufacturers that the FDA intended to order the withdrawal of DES 25 mg tablets (which were being used off-label as postcoital contraceptives). In late 1973, Eli Lilly, the largest U.S. manufacturer of DES, discontinued its DES 25 mg tablets, and in March 1974 it sent a letter to all U.S. physicians and pharmacists telling them it did not recommend use of DES as a postcoital contraceptive. Only one pharmaceutical company, Tablicaps, Inc., a small manufacturer of generic drugs, ever submitted (in January 1974) an ANDA for use of DES as an emergency postcoital contraceptive, and the FDA never approved it.
In February 1975, the FDA said it had not yet approved DES as a postcoital contraceptive, but would, after March 8, 1975, permit marketing of DES for that indication in emergency situations such as rape or incest if a manufacturer obtained an approved ANDA that provided patient labeling and special packaging as set out in an FDA final rule published in February 1975. To discourage off-label use of DES as a postcoital contraceptive, in February 1975 the FDA ordered DES 25 mg (and higher) tablets removed from the market and ordered that the labeling of lower doses (5 mg and lower) of DES still approved for other indications be changed to state: "THIS DRUG PRODUCT SHOULD NOT BE USED AS A POSTCOITAL CONTRACEPTIVE" in block capital letters on the first line of the physician prescribing information package insert and in a prominent and conspicuous location of the container and carton label. In March 1978, an FDA Drug Bulletin was sent to all U.S. physicians and pharmacists which said: "FDA has not yet given approval for any manufacturer to market DES as a postcoital contraceptive. The Agency, however, will approve this indication for emergency situations such as rape or incest if a manufacturer provides patient labeling and special packaging. To discourage 'morning after' use of DES without patient labeling, FDA has removed from the market the 25 mg tablets of DES, formerly used for this purpose." In the 1980s, off-label use of the Yuzpe regimen superseded off-label use of DES for postcoital contraception. DES is no longer commercially available in the U.S.; Eli Lilly, the last U.S. manufacturer, ceased production in spring 1997.

Preven

On February 25, 1997, the FDA posted a notice in the Federal Register saying it had concluded that the Yuzpe regimen was safe and effective for off-label use as postcoital EC, was prepared to accept NDAs for COCPs labeled as ECPs, and listed 6 then-available COCPs (there are now 22) that could be used as ECPs.
On September 1, 1998, the FDA approved the prescription Yuzpe regimen Preven Emergency Contraception Kit (which contained a urine pregnancy test and 4 COCPs). Preven was discontinued in May 2004.

Plan B

On July 28, 1999, the FDA approved the prescription progestin-only Plan B (two 750 µg levonorgestrel pills) emergency contraceptive. On August 24, 2006, the FDA approved nonprescription behind-the-counter access to Plan B from pharmacies staffed by a licensed pharmacist for women 18 or older; a prescription-only form of Plan B remained available for females aged 17 and younger. On November 6, 2006, Barr Pharmaceuticals announced that its subsidiary, Duramed Pharmaceuticals, had initiated shipment of dual-label Plan B OTC/Rx and that it would be available in pharmacies across the U.S. by mid-November 2006. On March 23, 2009, a US judge ordered the FDA to allow 17-year-olds to acquire Plan B without a prescription. This superseded the August 24, 2006 ruling, making Plan B available "behind the counter" to men and women aged 17 or older, with a prescription option remaining for girls under 17. On April 30, 2013, the FDA approved (with three-year marketing exclusivity) Teva Pharmaceutical Industries' Plan B One-Step for sale without a prescription to anyone age 15 or over who can show proof of age, such as a driver's license, birth certificate, or passport, to a drug store retail clerk. Generic one-pill levonorgestrel emergency contraceptives and all two-pill levonorgestrel emergency contraceptives remained restricted to sale from a pharmacist, without a prescription, to anyone age 17 or over who can show proof of age. On June 10, 2013, the Obama administration ceased trying to block over-the-counter availability of the pill. With this reversal, any person is able to purchase Plan B One-Step without a prescription.
Availability

The COVID-19 pandemic was reported to have caused "significant disruption" to contraceptive services in the United Kingdom.

United States

After Roe v. Wade and Doe v. Bolton resulted in the U.S. Supreme Court's 1973 ruling to legalize abortion, both federal and state laws were created to allow medical professionals and institutions the right to deny reproductive health services without financial, professional, or legal penalty. A historical survey conducted in the wake of Roe v. Wade concluded that right-to-privacy cases such as Griswold v. Connecticut gave women control over childrearing, including the use of contraception for reproductive autonomy. After this, women became more informed about contraceptives and began requesting them more often. Almost